
User talk:Lake

From MozTW Wiki


       CCNA: Cisco Certified Network Associate Study Guide Fourth Edition
       Chapter 1  Internetworking
       Possible causes of LAN traffic congestion are:
           * Too many hosts in a broadcast domain
           * Broadcast storms
           * Multicasting
           * Low bandwidth
       Two advantages of using routers in your network:
           * They don’t forward broadcasts by default.
           * They can filter the network based on layer-3 (Network layer) information (i.e., IP address).
       Some functions of a router:
           * Packet switching: Routers work at Layer 3 of the OSI model. A major function of the router is to route packets between networks.
           * Packet filtering: Through the use of access lists, routers can permit and deny traffic using layer-3 and layer-4 packet information.
           * Internetwork communication: The primary purpose of a router is to route traffic between different networks, allowing for internetworking.
       As long as there are 2 or more devices on a LAN segment, the possibility of a collision exists, whether a router is used or not.
         Switches aren’t used to create internetworks, they’re employed to add functionality to an internetwork LAN. The main purpose of a switch is to make a LAN work better— to optimize its performance—providing more bandwidth for the LAN’s users. And switches don’t forward packets to other networks as routers do. Instead, they only “switch” frames from one port to another within the switched network.
       [ Note ] Switches create separate collision domains, but a single broadcast domain. Routers provide a separate broadcast domain for each interface.
       [ Note ] You would use a bridge in a network to reduce collisions within broadcast domains and to increase the number of collision domains in your network, which provides more bandwidth for users.
       [ Note ] Although bridges are used to segment networks, they will not isolate broadcast or multicast packets.
         Hubs operate at layer one and are used primarily to physically add more stations to the LAN.
         Bridges and switches basically do the same thing—break up collision domains on a LAN. So what this means is that a switch is basically just a multiple-port bridge with more brainpower.
         Media converters work by converting data from one media type to another so it can work with the media of the LAN. A media converter also operates at layer one and provides no means for logical LAN segmentation.
         A reference model is a conceptual blueprint of how communications should take place. It addresses all the processes required for effective communication and divides these processes into logical groupings called layers . When a communication system is designed in this manner, it’s known as layered architecture .
         Software developers can use a reference model to understand computer communication processes and see what types of functions need to be accomplished on any one layer. If they are developing a protocol for a certain layer, all they need to concern themselves with is the specific layer’s functions, not those of any other layer. Another layer and protocol will handle the other functions. The technical term for this idea is binding .
       The primary purpose of all such models, especially the OSI model, is to allow different vendors’ networks to interoperate.
       The Application Layer
         The Application layer is also responsible for identifying and establishing the availability of the intended communication partner, and determining whether sufficient resources for the intended communication exist.
           * World Wide Web (WWW)
           * E-mail gateways
           * Electronic data interchange (EDI)
           * Special interest bulletin boards
           * Internet navigation utilities
           * Financial transaction services
       The Presentation Layer
       It presents data to the Application layer and is responsible for data translation and code formatting.
         This layer is essentially a translator and provides coding and conversion functions. A successful data-transfer technique is to adapt the data into a standard format before transmission. Computers are configured to receive this generically formatted data and then convert the data back into its native format for actual reading (for example, EBCDIC to ASCII). By providing translation services, the Presentation layer ensures that data transferred from the Application layer of one system can be read by the Application layer of another one.
           * PICT
           * TIFF
           * JPEG
           * MIDI
           * MPEG
           * QuickTime
           * RTF
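The EBCDIC-to-ASCII style translation mentioned above can be seen with Python's built-in codecs (cp500 is one EBCDIC variant; this only illustrates re-encoding between formats, not a real Presentation layer implementation):

```python
# Sketch of the Presentation layer's translation role: the same text "HELLO"
# is a different byte sequence under EBCDIC (here, the cp500 variant) than
# under ASCII, so one end must convert before the other can read it.
ebcdic_bytes = "HELLO".encode("cp500")
print(ebcdic_bytes.hex())            # c8c5d3d3d6 -- not ASCII bytes
print(ebcdic_bytes.decode("cp500"))  # HELLO -- readable again after translation
```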
       The Session Layer
         The Session layer is responsible for setting up, managing, and then tearing down sessions between Presentation layer entities. This layer also provides dialogue control between devices, or nodes. It coordinates communication between systems, and serves to organize their communication by offering three different modes: simplex, half duplex, and full duplex. To sum up, the Session layer basically keeps different applications’ data separate from other applications’ data.
           * Network File System (NFS)
           * Structured Query Language (SQL)
           * Remote Procedure Call (RPC)
           * X Window
           * AppleTalk Session Protocol (ASP)
           * Digital Network Architecture Session Control Protocol (DNA SCP)
       The Transport Layer
         The Transport layer segments and reassembles data into a data stream. Services located in the Transport layer both segment and reassemble data from upper-layer applications and unite it onto the same data stream. They provide end-to-end data transport services and can establish a logical connection between the sending host and destination host on an internetwork.
         The Transport layer is responsible for providing mechanisms for multiplexing upper-layer applications, establishing sessions, and tearing down virtual circuits. It also hides details of any network-dependent information from the higher layers by providing transparent data transfer.
       [ Note ] The term “reliable networking” can be used at the Transport layer. It means that acknowledgments, sequencing, and flow control will be used.
       Flow Control
       Data integrity is ensured at the Transport layer by maintaining flow control and by allowing users to request reliable data transport between systems.
         Reliable data transport employs a connection-oriented communications session between systems, and the protocols involved ensure that the following will be achieved:
           * The segments delivered are acknowledged back to the sender upon their reception.
           * Any segments not acknowledged are retransmitted.
           * Segments are sequenced back into their proper order upon arrival at their destination.
            * A manageable data flow is maintained in order to avoid congestion, overloading, and data loss.
       Connection-Oriented Communication
          In reliable transport operation, a device that wants to transmit sets up a connection-oriented communication with a remote device by creating a session. The transmitting device first establishes a connection-oriented session with its peer system, which is called a call setup, or a three-way handshake. Data is then transferred; when finished, a call termination takes place to tear down the virtual circuit.
       Let me sum up the steps in the connection-oriented session—the three-way handshake— pictured in Figure 1.5:
           * The first “connection agreement” segment is a request for synchronization.
            * The second and third segments acknowledge the request and establish connection parameters— the rules—between hosts. The receiver's sequencing is also requested to be synchronized here as well, so that a bi-directional connection is formed.
           * The final segment is also an acknowledgment. It notifies the destination host that the connection agreement has been accepted and that the actual connection has been established. Data transfer can now begin.
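The three steps above can be sketched as a toy model (the function name, sequence numbers, and tuple format are invented for illustration; real TCP handshakes happen inside the OS network stack):

```python
# Toy sketch of the three-way handshake described above.
# Each segment is modeled as a (flags, sequence, acknowledgment) tuple.

def three_way_handshake(client_isn, server_isn):
    """Return the three 'connection agreement' segments in order."""
    # Step 1: the first segment is a request for synchronization.
    syn = ("SYN", client_isn, None)
    # Step 2: the server acknowledges and requests its own synchronization,
    # so a bi-directional connection is formed.
    syn_ack = ("SYN+ACK", server_isn, client_isn + 1)
    # Step 3: the final segment is also an acknowledgment; the connection
    # is established and data transfer can begin.
    ack = ("ACK", client_isn + 1, server_isn + 1)
    return [syn, syn_ack, ack]

for segment in three_way_handshake(client_isn=100, server_isn=300):
    print(segment)
```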
         Okay, so what happens when a machine receives a flood of datagrams too quickly for it to process? It stores them in a memory section called a buffer. But this buffering action can only solve the problem if the datagrams are part of a small burst. If not, and the datagram deluge continues, a device’s memory will eventually be exhausted, its flood capacity will be exceeded, and it will react by discarding any additional datagrams that arrive.
       Windowing
       The quantity of data segments (measured in bytes) that the transmitting machine is allowed to send without receiving an acknowledgment for them is called a window.
       [ Note ] Windows are used to control the amount of outstanding, unacknowledged data segments.
         So the size of the window controls how much information is transferred from one end to the other. While some protocols quantify information by observing the number of packets, TCP/IP measures it by counting the number of bytes.
       [ Note ] If a TCP session is set up with a window size of 2 bytes, and during the transfer stage of the session the window size changes from 2 bytes to 3 bytes, the sending host must then transmit 3 bytes before waiting for an acknowledgment instead of the 2 bytes originally set up in the virtual circuit.
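A minimal sketch of the windowing idea in the note above, assuming a simple stop-and-wait-per-window model (the function name and event format are invented):

```python
# Sketch of windowing: the sender may transmit up to `window` bytes
# before it must stop and wait for an acknowledgment.

def transfer(data: bytes, window: int):
    """Return the send/ack events for transferring `data`."""
    events = []
    sent = 0
    while sent < len(data):
        burst = data[sent:sent + window]            # send up to a full window
        events.append(("send", bytes(burst)))
        events.append(("ack", sent + len(burst)))   # wait for the acknowledgment
        sent += len(burst)
    return events

# With a window of 3 bytes, 3 bytes go out before each wait:
print(transfer(b"ABCDEF", window=3))
```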
       Acknowledgments
         Reliable data delivery ensures the integrity of a stream of data sent from one machine to the other through a fully functional data link. It guarantees that the data won’t be duplicated or lost. This is achieved through something called positive acknowledgment with retransmission— a technique that requires a receiving machine to communicate with the transmitting source by sending an Acknowledgment message back to the sender when it receives data. The sender documents each segment it sends and waits for this acknowledgment before sending the next segment. When it sends a segment, the transmitting machine starts a timer and retransmits if it expires before an acknowledgment is returned from the receiving end.
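The positive-acknowledgment-with-retransmission scheme can be sketched like this (a toy model with invented names; a retry count stands in for the real retransmission timer):

```python
# Sketch of positive acknowledgment with retransmission: the sender
# documents each segment, waits for the acknowledgment, and retransmits
# when none comes back in time.

def send_reliably(segments, transmit, max_tries=4):
    """Send each segment in order; retransmit until it is acknowledged."""
    log = []
    for seq, data in enumerate(segments):
        for attempt in range(max_tries):
            acked = transmit(seq, data)        # True if an ACK came back
            log.append((seq, attempt, acked))
            if acked:
                break                          # next segment only after the ACK
        else:
            raise TimeoutError(f"segment {seq} never acknowledged")
    return log

# A lossy link that drops the first attempt of every segment:
attempts = {}
def lossy_link(seq, data):
    attempts[seq] = attempts.get(seq, 0) + 1
    return attempts[seq] > 1                   # the retransmission succeeds

print(send_reliably(["a", "b"], lossy_link))
```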


       The Network Layer
         The Network layer (also called layer 3) manages device addressing, tracks the location of devices on the network, and determines the best way to move data, which means that the Network layer must transport traffic between devices that aren’t locally attached. Routers (layer-3 devices) are specified at the Network layer and provide the routing services within an internetwork.
          It happens like this: First, when a packet is received on a router interface, the destination IP address is checked. If the packet isn't destined for that particular router, the router looks up the destination network address in its routing table. Once the router chooses an exit interface, the packet is sent to that interface to be framed and sent out on the local network. If the router can't find an entry for the packet's destination network in the routing table, the router drops the packet; it does not forward the frame out all connected segments (that flooding behavior belongs to switches, not routers).
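The lookup just described can be sketched with Python's standard ipaddress module (the table contents and interface names here are invented):

```python
import ipaddress

# Sketch of a routing table: destination network -> exit interface.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):     "Serial0",
    ipaddress.ip_network("192.168.1.0/24"): "Ethernet0",
}

def route(dst_ip):
    """Pick an exit interface, or None when no entry matches (packet dropped)."""
    dst = ipaddress.ip_address(dst_ip)
    # Longest-prefix match: prefer the most specific network containing dst.
    matches = [net for net in routing_table if dst in net]
    if not matches:
        return None                    # no entry -> the router drops the packet
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(route("192.168.1.77"))  # -> Ethernet0
print(route("172.16.0.1"))    # -> None (dropped)
```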
       Two types of packets are used at the Network layer: data and route updates.
       Data packets   Used to transport user data through the internetwork. Protocols used to support data traffic are called routed protocols; examples of routed protocols are IP and IPX. You’ll learn about IP addressing in Chapter 2 and Chapter 3, “IP Subnetting and Variable Length Subnet Masks (VLSM).”
        Route update packets   Used to update neighboring routers about the networks connected to all routers within the internetwork. Protocols that send route update packets are called routing protocols; examples of some common ones are RIP, EIGRP, and OSPF. Route update packets are used to help build and maintain routing tables on each router.
       Network addresses   Protocol-specific network addresses. A router must maintain a routing table for individual routing protocols because each routing protocol keeps track of a network with a different addressing scheme. Think of it as a street sign in each of the different languages spoken by the residents that live on a particular street.
       Interface    The exit interface a packet will take when destined for a specific network.
        Metric   The distance to the remote network. Different routing protocols use different ways of computing this distance. Some routing protocols use something called a hop count (the number of routers a packet passes through en route to a remote network), while others use bandwidth, delay of the line, or even tick count (1/18 of a second).
          Each router interface represents a separate network, so it must be assigned a unique network identification number, and each host on the network connected to that interface must use the same network number.
       Here are some points about routers you should really commit to memory:
           * Routers, by default, will not forward any broadcast or multicast packets.
            * Routers use the logical address in a Network layer header to determine the next hop router to forward the packet to.
            * Routers can use access lists, created by an administrator, to control security on the types of packets that are allowed to enter or exit an interface.
           * Routers can provide layer-2 bridging functions if needed and can simultaneously route through the same interface.
           * Layer-3 devices (routers in this case) provide connections between virtual LANs (VLANs).
           * Routers can provide quality of service (QoS) for specific types of network traffic.
       The Data Link Layer
         The Data Link layer provides the physical transmission of the data and handles error notification, network topology, and flow control. This means the Data Link layer will ensure that messages are delivered to the proper device on a LAN using hardware addresses, and translates messages from the Network layer into bits for the Physical layer to transmit. The Data Link layer formats the message into pieces, each called a data frame, and adds a customized header containing the hardware destination and source address.
       Figure 1.10 shows the Data Link layer with the Ethernet and IEEE specifications. When you check it out, notice that the IEEE 802.2 standard is used in conjunction with and adds functionality to the other IEEE standards.
          Routers, working at the Network layer, don't care at all about where a particular host is located. They're only concerned about where networks are located, and the best way to reach them, including remote ones. Routers are totally obsessive when it comes to networks. And for once, this is a good thing! It's the Data Link layer that's responsible for the actual unique identification of each device that resides on a local network.
         Each time a packet is sent between routers, it’s framed with control information at the Data Link layer, but that information is stripped off at the receiving router and only the original packet is left completely intact. This framing of the packet continues for each hop until the packet is finally delivered to the correct receiving host. It’s really important to understand that the packet itself is never altered along the route; it’s only encapsulated with the type of control information required for it to be properly passed on to the different media types.
       The IEEE Ethernet Data Link layer has two sublayers:
         Media Access Control (MAC) 802.3 Defines how packets are placed on the media. Contention media access is “first come/first served” access where everyone shares the same bandwidth— hence the name. Physical addressing is defined here, as well as logical topologies. What’s a logical topology? It’s the signal path through a physical topology. Line discipline, error notification (not correction), ordered delivery of frames, and optional flow control can also be used at this sublayer.
         Logical Link Control (LLC) 802.2 Responsible for identifying Network layer protocols and then encapsulating them. An LLC header tells the Data Link layer what to do with a packet once a frame is received. It works like this: A host will receive a frame and look in the LLC header to find out where the packet is destined for—say, the IP protocol at the Network layer. The LLC can also provide flow control and sequencing of control bits.
       The switches and bridges both work at the Data Link layer and filter the network using hardware (MAC) addresses.
       Switches and Bridges at the Data Link Layer
         Layer-2 switching is considered hardware-based bridging because it uses specialized hardware called an application-specific integrated circuit (ASIC). ASICs can run up to gigabit speeds with very low latency rates.
       [ Note ] Latency is the time measured from when a frame enters a port to the time it exits a port.
         Bridges and switches read each frame as it passes through the network. The layer-2 device then puts the source hardware address in a filter table and keeps track of which port the frame was received on. This information (logged in the bridge’s or switch’s filter table) is what helps the machine determine the location of the specific sending device.
         The real estate business is all about location, location, location, and it’s the same way for both layer-2 and -3 devices. Though both need to be able to negotiate the network, it’s crucial to remember that they’re concerned with very different parts of it. Primarily, layer-3 machines (such as routers) need to locate specific networks, whereas layer-2 machines (switches and bridges) need to eventually locate specific devices. So, networks are to routers as individual devices are to switches and bridges. And routing tables that “map” the internetwork are for routers, as filter tables that “map” individual devices are for switches and bridges.
         After a filter table is built on the layer-2 device, it will only forward frames to the segment where the destination hardware address is located. If the destination device is on the same segment as the frame, the layer-2 device will block the frame from going to any other segments. If the destination is on a different segment, the frame can only be transmitted to that segment. This is called transparent bridging.
         When a switch interface receives a frame with a destination hardware address that isn’t found in the device’s filter table, it will forward the frame to all connected segments(NOT DROP). If the unknown device that was sent the “mystery frame” replies to this forwarding action, the switch updates its filter table regarding that device’s location. But in the event the destination address of the transmitting frame is a broadcast address, the switch will forward all broadcasts to every connected segment by default.
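The learning, filtering, and flooding behavior described in the last two paragraphs can be collected into a small model (the class name and port numbers are invented; real switches do this in ASIC hardware):

```python
# Sketch of transparent bridging: learn source addresses as frames arrive,
# then filter, forward, or flood based on the filter table.

class Layer2Switch:
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, ports):
        self.ports = ports
        self.filter_table = {}                 # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame goes out on."""
        # Learn: record which port the source address was seen on.
        self.filter_table[src_mac] = in_port
        # Broadcasts and unknown destinations are flooded out every
        # other segment; a known destination goes out one port only.
        if dst_mac == self.BROADCAST or dst_mac not in self.filter_table:
            return [p for p in self.ports if p != in_port]
        out_port = self.filter_table[dst_mac]
        # Same segment as the sender: block (filter) the frame entirely.
        return [] if out_port == in_port else [out_port]

sw = Layer2Switch(ports=[1, 2, 3])
print(sw.receive(1, "aa:aa", "bb:bb"))   # unknown dst -> flood to [2, 3]
print(sw.receive(2, "bb:bb", "aa:aa"))   # learned dst -> [1]
```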
         Another benefit of LAN switching over hub-centered implementations is that each device on every segment plugged into a switch can transmit simultaneously. At least, they can as long as there is only one host on each port and a hub isn’t plugged into a switch port. (Remember, each switch port is its own collision domain.) As you might have guessed, hubs only allow one device per network segment to communicate at a time.
          Each network segment connected to the switch must have the same type of devices attached. What this means to you and me is that you can connect an Ethernet hub into a switch port and then connect multiple Ethernet hosts into the hub, but you can't mix Token Ring hosts in with the Ethernet gang on the same segment. Mixing hosts in this manner would require media translation.


       The Physical Layer
          The Physical layer does two things: It sends bits and receives bits. Bits come only in values of 1 or 0—a Morse code with numerical values. The Physical layer communicates directly with the various types of actual communication media. Different kinds of media represent these bit values in different ways. Some use audio tones, while others employ state transitions—changes in voltage from high to low and low to high. Specific protocols are needed for each type of media to describe the proper bit patterns to be used, how data is encoded into media signals, and the various qualities of the physical media's attachment interface.
         The Physical layer specifies the electrical, mechanical, procedural, and functional requirements for activating, maintaining, and deactivating a physical link between end systems. This layer is also where you identify the interface between the data terminal equipment (DTE) and the data communication equipment (DCE).
       The Physical layer’s connectors and different physical topologies are defined by the OSI as standards, allowing disparate systems to communicate.
       Hubs at the Physical Layer
          A hub is really a multiple-port repeater. A repeater receives a digital signal and reamplifies or regenerates that signal, and then forwards the digital signal out all active ports without looking at any data.
       Ethernet networks run a logical bus topology, meaning that the signal has to run from end to end of the network.


       Ethernet Networking
          Ethernet is a contention media access method that allows all hosts on a network to share the same bandwidth of a link. Ethernet is popular because it's readily scalable.
        Ethernet uses both Data Link layer and Physical layer specifications.
         Ethernet networking uses Carrier Sense Multiple Access with Collision Detection (CSMA/ CD), a protocol that helps devices share the bandwidth evenly without having two devices transmit at the same time on the network medium. CSMA/CD was created to overcome the problem of those collisions that occur when packets are transmitted simultaneously from different nodes. And trust me, good collision management is crucial because when a node transmits in a CSMA/CD network, all the other nodes on the network receive and examine that transmission. Only bridges and routers can effectively prevent a transmission from propagating throughout the entire network!
         So, how does the CSMA/CD protocol work? Like this: When a host wants to transmit over the network, it first checks for the presence of a digital signal on the wire. If all is clear (no other host is transmitting), the host will then proceed with its transmission. But it doesn’t stop there. The transmitting host constantly monitors the wire to make sure no other hosts begin transmitting. If the host detects another signal on the wire, it sends out an extended jam signal that causes all nodes on the segment to stop sending data (think, busy signal). The nodes respond to that jam signal by waiting a while before attempting to transmit again. Backoff algorithms
       determine when the colliding stations can retransmit. If collisions keep occurring after 15 tries, the nodes attempting to transmit will then time out. Pretty clean!
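The backoff step can be sketched as the truncated binary exponential backoff that 802.3 uses (parameter names are invented; the exponent cap of 10 and the give-up-after-15-retries limit follow the standard algorithm):

```python
import random

# Sketch of the backoff algorithm run after a collision: wait a random
# number of slot times, with the range doubling on each repeat collision.

def backoff_slots(collision_count, rng=random.randrange):
    """Pick how many slot times to wait after the Nth collision in a row."""
    if collision_count > 15:
        # Matches the text above: after 15 tries the node times out.
        raise TimeoutError("too many collisions; the node gives up")
    # After N collisions, wait a random number of slots in [0, 2^k),
    # where k grows with N but is capped at 10.
    k = min(collision_count, 10)
    return rng(2 ** k)

# After the first collision a node waits 0 or 1 slot times:
print(backoff_slots(1, rng=lambda n: n - 1))  # worst case -> 1
```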
       The effects of having a CSMA/CD network sustaining heavy collisions include:
       * Delay
       * Low throughput
       * Congestion
       [ Note ] Backoff on an 802.3 network is the retransmission delay that’s enforced when a collision occurs.
       Half- and Full-Duplex Ethernet
          Half-duplex Ethernet also uses the CSMA/CD protocol to help prevent collisions and to permit retransmitting if a collision does occur. If a hub is attached to a switch, the link must operate in half-duplex mode because the end stations must be able to detect collisions.   (switch <----half-duplex ----> hub)
         But full-duplex Ethernet uses two pairs of wires, instead of one wire pair like half duplex. And full duplex uses a point-to-point connection between the transmitter of the transmitting device and the receiver of the receiving device. This means that with full-duplex data transfer, you get a faster data transfer compared to half duplex. And because the transmitted data is sent on a different set of wires than the received data, no collisions will occur.
       Full-duplex Ethernet can be used in three situations:
       * With a connection from a switch to a host
       * With a connection from a switch to a switch
       * With a connection from a host to a host using a crossover cable
        * Hubs cannot operate in full-duplex mode
       [ Note ] Full-duplex Ethernet requires a point-to-point connection when only two nodes are present.
         Now, if it’s capable of all that speed, why wouldn’t it deliver? Well, when a full-duplex Ethernet port is powered on, it first connects to the remote end, and then negotiates with the other end of the FastEthernet link. This is called an auto-detect mechanism. This mechanism first decides on the exchange capability, which means it checks to see if it can run at 10 or 100Mbps. It then checks to see if it can run full duplex, and if it can’t, it will run half duplex.
       [ Note ] Remember that half-duplex Ethernet shares a collision domain and provides a lower effective throughput than full-duplex Ethernet, which typically has a private collision domain and a higher effective throughput.
       Ethernet at the Data Link Layer
         Ethernet at the Data Link layer is responsible for Ethernet addressing, commonly referred to as hardware addressing or MAC addressing. Ethernet is also responsible for framing packets received from the Network layer and preparing them for transmission on the local network through the Ethernet contention media access method. There are four different types of Ethernet frames available:
       * Ethernet_II
       * IEEE 802.3
       * IEEE 802.2
       * SNAP
       Ethernet Addressing
          Ethernet addressing uses the Media Access Control (MAC) address burned into each and every Ethernet Network Interface Card (NIC). The MAC, or hardware, address is a 48-bit (6-byte) address written in a hexadecimal format. Figure 1.11 shows the 48-bit MAC addresses and how the bits are divided.
          The organizationally unique identifier (OUI) is assigned by the IEEE to an organization. It's composed of 24 bits, or 3 bytes. The organization, in turn, assigns a globally administered address (24 bits, or 3 bytes) that is unique (supposedly, again—no guarantees) to each and every adapter they manufacture.
          Look closely at the figure. The high-order bit is the Individual/Group (I/G) bit. When it has a value of 0, we can assume that the address is actually the MAC address of a device and may well appear in the source portion of the MAC header. When it is a 1, we can assume that the address represents either a broadcast or multicast address in Ethernet, or a broadcast or functional address in TR and FDDI (who really knows about FDDI?). The next bit is the G/L bit (also known as U/L, where U means universal). When set to 0, this bit represents a globally administered address (as by the IEEE). When the bit is a 1, it represents an administratively locally governed address (as in DECnet).
          The low-order 24 bits of an Ethernet address represent a locally (if anything) administered or manufacturer-assigned code. This portion commonly starts with 24 0s for the first card made and continues in order until there are 24 1s for the last (16,777,216th) card made. You'll actually find that many manufacturers use these same six hex digits as the last six characters of their serial number on the same card.
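The bit fields described above can be picked apart in code (a sketch; the I/G and G/L bits are the two low-order bits of the first octet as written in hex, which are the first bits on the wire because Ethernet transmits each byte LSB first):

```python
# Sketch of decomposing a MAC address into OUI, vendor-assigned part,
# and the I/G and G/L control bits.

def parse_mac(mac):
    """Break a colon-separated MAC address into its component fields."""
    octets = bytes.fromhex(mac.replace(":", ""))
    first = octets[0]
    return {
        "oui": mac[:8],                      # first 3 bytes: IEEE-assigned OUI
        "nic": mac[9:],                      # last 3 bytes: vendor-assigned
        "group": bool(first & 0b0000_0001),  # I/G bit: 1 = broadcast/multicast
        "local": bool(first & 0b0000_0010),  # G/L (U/L) bit: 1 = locally administered
    }

print(parse_mac("00:60:f5:00:1f:27"))  # globally administered unicast
print(parse_mac("ff:ff:ff:ff:ff:ff"))  # broadcast: the group bit is set
```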
       Ethernet Frames
         The Data Link layer is responsible for combining bits into bytes and bytes into frames. Frames are used at the Data Link layer to encapsulate packets handed down from the Network layer for transmission on a type of media access. There are three types of media access methods: contention (Ethernet), token passing (Token Ring and FDDI), and polling (IBM Mainframes and 100VG-AnyLAN).
       The following details the different fields in the 802.3 and Ethernet frame types:


        Preamble   An alternating 1,0 pattern provides a 5MHz clock at the start of each packet, which allows the receiving devices to lock onto the incoming bit stream.
       Start Frame Delimiter (SFD)/Synch   The preamble is seven octets and the SFD is one octet (Synch). The SFD is 10101011, where the last pair of 1s allows the receiver to come into the alternating 1,0 pattern somewhere in the middle and still sync up and detect the beginning of the data.
       Destination Address (DA)   This transmits a 48-bit value using the least significant bit (LSB) first. The DA is used by receiving stations to determine whether an incoming packet is addressed to a particular node. The destination address can be an individual address, or a broadcast or multicast MAC address. Remember that a broadcast is all 1s (or Fs in hex) and is sent to all devices, but a multicast is sent only to a similar subset of nodes on a network.
       Source Address (SA)   The SA is a 48-bit MAC address used to identify the transmitting device, and it uses the LSB first. Broadcast and multicast address formats are illegal within the SA field.
       Length or Type field   802.3 uses a Length field, but the Ethernet frame uses a Type field to identify the Network layer protocol. 802.3 cannot identify the upper-layer protocol and must be used with a proprietary LAN—IPX, for example.
        Data    This is a packet sent down to the Data Link layer from the Network layer. The size of the data field can vary from 46 to 1,500 bytes.
       Frame Check Sequence (FCS)   FCS is a field at the end of the frame that’s used to store the cyclic redundancy check (CRC).
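The FCS check can be sketched with Python's zlib.crc32, which uses the same CRC-32 polynomial as Ethernet (the on-wire bit ordering and complementing details are glossed over; function names are invented):

```python
import zlib

# Sketch of the FCS: the sender appends a CRC-32 of the frame contents,
# and the receiver recomputes it to detect bit errors in transit.

def add_fcs(frame: bytes) -> bytes:
    """Append a 4-byte CRC-32 to the frame."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Recompute the CRC over the frame and compare with the trailing FCS."""
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "little") == fcs

wire = add_fcs(b"header+data")
print(fcs_ok(wire))                              # True: frame arrived intact
corrupted = bytes([wire[0] ^ 0x01]) + wire[1:]   # flip one bit in transit
print(fcs_ok(corrupted))                         # False: CRC mismatch
```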
         Let’s pause here for a minute and take a look at some frames caught on our trusty Etherpeek network analyzer. You can see that the frame below has only three fields: a Destination, Source, and Type (shown as Protocol Type on this analyzer) field:
       Destination: 	00:60:f5:00:1f:27
       Source: 	00:60:f5:00:1f:2c
       Protocol Type: 	08-00 IP


         This is an Ethernet_II frame. Notice the type field is IP, or 08-00 in hexadecimal. The next frame has the same fields, so it must be an Ethernet_II frame too:
       Destination: 	ff:ff:ff:ff:ff:ff Ethernet Broadcast
       Source: 	02:07:01:22:de:a4
       Protocol Type: 	81-37 NetWare


         I included this one so you could see that the frame can carry more than just IP—it can also carry IPX, or 81-37h. Did you notice that this frame was a broadcast? You can tell because the destination hardware address is all 1s in binary, or all Fs in hexadecimal. Now, pay special attention to the length field in the next frame; this must be an 802.3 frame:
       Flags: 	0x80 802.3
       Status: 	0x00
       Packet Length: 	64
       Timestamp: 	12:45:45.192000 06/26/1998
       Destination: 	ff:ff:ff:ff:ff:ff Ethernet Broadcast
       Source: 	08:00:11:07:57:28
       Length: 	34


         The problem with this frame is this: How do you know which protocol this packet is going to be handed to at the Network layer? It doesn’t specify in the frame, so it must be IPX. Why? Because when Novell created the 802.3 frame type (before the IEEE did and called it 802.3 Raw), Novell was pretty much the only LAN server out there. So, Novell assumed that if you were running a LAN, it must be IPX, and they didn’t include any Network layer protocol field information in the 802.3 frame.
       802.2 and SNAP
         Since the 802.3 Ethernet frame cannot by itself identify the upper-layer (Network) protocol, it obviously needs some help. The IEEE defined the 802.2 LLC specifications to provide this function and more. Figure 1.13 shows the IEEE 802.3 with LLC (802.2) and the Subnetwork Access Protocol (SNAP) frame types.
       Figure 1.13 shows how the LLC header information is added to the data portion of the frame. Now, let’s take a look at an 802.2 frame and SNAP captured from our analyzer.
       802.2 Frame
       The following is an 802.2 frame captured with a protocol analyzer:
       Flags: 	0x80 802.3
       Status: 	0x02 Truncated
       Packet Length: 	64
       Slice Length: 	51
       Timestamp: 	12:42:00.592000 03/26/1998
       Destination 	ff:ff:ff:ff:ff:ff Ethernet Broadcast
       Source: 	00:80:c7:a8:f0:3d
       LLC Length: 	37
       Dest. SAP: 	0xe0 NetWare
       Source SAP: 	0xe0 NetWare
       Individual LLC Sublayer Management Function 	 
       Command: 	0x03 Unnumbered Information


         You can see that the first frame has a Length field, so it’s probably an 802.3, right? Maybe. Look again. It also has a DSAP and an SSAP, so it’s not an 802.3. It has to be an 802.2 frame. (Remember—an 802.2 frame is an 802.3 frame with the LLC information in the data field of the header so we know what the upper-layer protocol is.)
       SNAP Frame
         The SNAP frame has its own protocol field to identify the upper-layer protocol. This is really a way to allow an Ethernet_II Ether-Type field to be used in an 802.3 frame. Even though the following network trace shows a protocol field, it is actually an Ethernet_II type (Ether-Type) field:
       Flags: 	0x80 802.3
       Status: 	0x00
       Packet Length: 	78
       Timestamp: 	09:32:48.264000 01/04/2000
       802.3 Header 	 
       Destination 	09:00:07:FF:FF:FF AT Ph 2 Broadcast
       Source: 	00:00:86:10:C1:6F
       LLC Length: 	60
       802.2 Logical Link Control (LLC) Header 	 
       Dest. SAP: 	0xAA SNAP
       Source SAP: 	0xAA SNAP
       Command: 	0x03 Unnumbered Information
       Protocol: 	0x080007809B AppleTalk = Ethernet_II type (Ether-Type) field


         You can identify a SNAP frame because the DSAP and SSAP fields are always AA, and the Command field is always 3. This frame type was created because not all protocols worked well with the 802.3 Ethernet frame, which didn’t have an Ether-Type field. To allow the proprietary protocols created by application developers to be used in the LLC frame, the IEEE defined the SNAP format that uses the exact same codes as Ethernet_II. Up until about 1997 or so, the SNAP frame was on its way out of the corporate market. However, the new 802.11 wireless LAN specification uses an Ethernet SNAP field to identify the Network layer protocol. Cisco also still uses a SNAP frame with their proprietary protocol Cisco Discovery Protocol (CDP)—something I’m going to talk about in Chapter 9, “Managing a Cisco Internetwork.”
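         The frame-identification rules from this section can be summarized in a small classifier. This is an illustrative sketch, not a full parser: it assumes the raw bytes start at the destination MAC address and only distinguishes the four frame types discussed here.

```python
def classify_ethernet_frame(frame: bytes) -> str:
    """Classify a raw Ethernet frame using the rules described above."""
    # Bytes 12-13 hold either an EtherType (Ethernet_II) or a length (802.3).
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len >= 0x0600:          # values >= 1536 are EtherTypes
        return "Ethernet_II"
    dsap, ssap, control = frame[14], frame[15], frame[16]
    if dsap == 0xFF and ssap == 0xFF:  # Novell "raw" frame: data begins with FFFF
        return "802.3 Raw (Novell, assumed IPX)"
    if dsap == 0xAA and ssap == 0xAA and control == 0x03:
        return "802.2 SNAP"            # an Ether-Type style protocol field follows
    return "802.2 (LLC)"
```

         For example, a frame whose type/length field is 08-00 classifies as Ethernet_II (IP), while one with DSAP/SSAP of AA and a command of 03 classifies as SNAP, just as in the traces above.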
       Ethernet at the Physical Layer
         Each Ethernet cable type that is specified by the EIA/TIA has inherent attenuation, which is defined as the loss of signal strength as it travels the length of a cable and is measured in decibels (dB). The cabling used in corporate and home markets is measured in categories. A higher quality cable will have a higher rated category and lower attenuation. For example, category 5 is better than category 3 because category 5 cable has more wire twists per foot and therefore less crosstalk. Crosstalk is the unwanted signal interference from adjacent pairs in the cable.
         Near End Crosstalk (NEXT) is crosstalk measured at the transmitting end of the cable. Far End Crosstalk (FEXT) is measured at the far end from where the signal was injected into the cable. Power Sum NEXT (PSNEXT) is basically a mathematical calculation that simulates all four pairs being energized at the same time.
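         Since PSNEXT is a power sum, it can be computed from the per-pair NEXT figures. The sketch below uses the standard power-sum formula (convert each dB loss back to a power ratio, add, convert back to dB); the sample values are made up for illustration.

```python
import math

def power_sum_next(next_db: list[float]) -> float:
    """Combine per-pair NEXT losses (in dB) into a power-sum value.

    Crosstalk powers add linearly, so each dB figure is converted back
    to a power ratio, summed, and converted back to dB. A lower result
    means more combined crosstalk than any single disturbing pair alone.
    """
    total_power = sum(10 ** (-loss / 10) for loss in next_db)
    return -10 * math.log10(total_power)
```

         With three disturbing pairs at 40 dB each, the power sum comes out lower (worse) than 40 dB, which is why PSNEXT is the more demanding measurement.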



       Ethernet Cabling
       The types of Ethernet cables available are:
       * Straight-through cable
       * Crossover cable
       * Rolled cable
       Straight-Through Cable
       The straight-through cable is used to connect:
       * Host to switch or hub
       * Router to switch or hub
         Four wires are used in straight-through cable to connect Ethernet devices. It is relatively simple to create this type; Figure 1.15 shows the four wires used in a straight-through Ethernet cable.
         Notice that only pins 1, 2, 3, and 6 are used. Just connect 1 to 1, 2 to 2, 3 to 3, and 6 to 6, and you’ll be up and networking in no time. However, remember that this would be an Ethernet-only cable and wouldn’t work with Voice, Token Ring, ISDN, etc.
       Crossover Cable
       The crossover cable can be used to connect:
       * Switch to switch
       * Hub to hub
       * Host to host
       * Hub to switch
       * Router direct to host
            The same four wires are used in this cable as in the straight-through cable, but we just connect different pins together. Figure 1.16 shows how the four wires are used in a crossover Ethernet cable. Notice that instead of connecting 1 to 1, etc., here we connect pins 1 to 3 and 2 to 6 on each side of the cable.
    
    Hub, Switch 	type 1
    PC, Router 	type 2


    type 1 	<-- straight-through cable --> 	type 2
    type 1 	<--    crossover cable     --> 	type 1
    type 2 	<--    crossover cable     --> 	type 2
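       The rule of thumb behind this table (different device types take a straight-through cable, like device types take a crossover) can be sketched as a small lookup. The pin mappings and the helper below are illustrative only.

```python
# Pin-to-pin wiring for the two Ethernet cable types (active pins only).
STRAIGHT_THROUGH = {1: 1, 2: 2, 3: 3, 6: 6}   # host/router <-> hub/switch
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2}          # like device <-> like device

def cable_for(device_a: str, device_b: str) -> str:
    """Pick the cable type from the device table above.

    "type 1" devices (hub, switch) and "type 2" devices (PC, router)
    need a straight-through cable between different types and a
    crossover cable within the same type.
    """
    type1 = {"hub", "switch"}
    a_is_type1 = device_a.lower() in type1
    b_is_type1 = device_b.lower() in type1
    return "straight-through" if a_is_type1 != b_is_type1 else "crossover"
```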
       Rolled Cable
          Although rolled cable isn’t used to connect any Ethernet connections together, you can use a rolled Ethernet cable to connect a host to a router console serial communication (com) port. If you have a Cisco router or switch, you would use this cable to connect your PC running HyperTerminal to the Cisco hardware. Eight wires are used in this cable to connect serial devices, although not all eight are used to send information, just as in Ethernet networking. Figure 1.17 shows the eight wires used in a rolled cable.
         These are probably the easiest cables to make, because you just cut the end off on one side of a straight-through cable and reverse the end.
       Once you have the correct cable connected from your PC to the Cisco router or switch, you can start HyperTerminal to create a console connection and configure the device.
       Set the configuration as follows:
       1. Open HyperTerminal and enter a name for the connection. It is irrelevant what you name it, but I always just use “Cisco.” Then click OK.
       2. Choose the communications port—either COM1 or COM2, whichever is open on your PC.
        
       3. Now set the port settings. The default values (2400bps and no flow control) will not work; you must set the port settings as shown in Figure 1.18.
       Notice that the bit rate is now set to 9600 and the flow control is set to none. At this point, you can click OK and press the Enter key, and you should be connected to your Cisco device console port.
       Wireless Networking
       * Narrowband Wireless LANs
       * Personal Communication Services (PCS)
       * Narrowband PCS
       * Broadband PCS
       * Satellite
       * Infrared Wireless LANs
       * Spread Spectrum Wireless LANs
       Data Encapsulation
         Figure 1.20 shows the PDUs and how they attach control information to each layer. This figure demonstrates how the upper-layer user data is converted for transmission on the network. The data stream is then handed down to the Transport layer, which sets up a virtual circuit to the receiving device by sending over a synch packet. The data stream is then broken up into smaller pieces, and a Transport layer header (a PDU) is created and attached to the header of the data field; now the piece of data is called a segment. Each segment is sequenced so the data stream can be put back together on the receiving side exactly as it was transmitted.
         Each segment is then handed to the Network layer for network addressing and routing through the internetwork. Logical addressing (for example, IP) is used to get each segment to the correct network. The Network layer protocol adds a control header to the segment handed down from the Transport layer, and what we have now is called a packet or datagram. Remember that the Transport and Network layers work together to rebuild a data stream on a receiving host, but it’s not part of their work to place their PDUs on a local network segment—which is the only way to get the information to a router or host.
         It’s the Data Link layer that’s responsible for taking packets from the Network layer and placing them on the network medium (cable or wireless). The Data Link layer encapsulates each packet in a frame, and the frame’s header carries the hardware address of the source and destination hosts. If the destination device is on a remote network, then the frame is sent to a router to be routed through an internetwork. Once it gets to the destination network, a new frame is used to get the packet to the destination host.
          To put this frame on the network, it must first be put into a digital signal. Since a frame is really a logical group of 1s and 0s, the Physical layer is responsible for encoding these digits into a digital signal, which is read by devices on the same local network. The receiving devices will synchronize on the digital signal and extract (decode) the ones and zeros from the digital signal. At this point the devices build the frames, run a cyclic redundancy check (CRC), and then check their answer against the answer in the frame’s FCS field. If it matches, the packet is pulled from the frame, and what’s left of the frame is discarded. This process is called de-encapsulation. The packet is handed to the Network layer, where the address is checked. If the address matches, the segment is pulled from the packet, and what’s left of the packet is discarded. The segment is processed at the Transport layer, which rebuilds the data stream and acknowledges to the transmitting station that it received each piece. It then happily hands the data stream to the upper-layer application.
       At a transmitting device, the data encapsulation method works like this:
      1. User information is converted to data for transmission on the network.
      2. Data is converted to segments and a reliable connection is set up between the transmitting and receiving hosts.
      3. Segments are converted to packets or datagrams, and a logical address is placed in the header so each packet can be routed through an internetwork.
      4. Packets or datagrams are converted to frames for transmission on the local network. Hardware (Ethernet) addresses are used to uniquely identify hosts on a local network segment.
      5. Frames are converted to bits, and a digital encoding and clocking scheme is used.
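        The five steps above can be mimicked with plain byte strings. The header contents here are placeholders standing in for real TCP/IP fields, purely to show the order in which headers are added and stripped:

```python
def encapsulate(data: bytes) -> bytes:
    """Walk user data down the stack, adding one header per layer."""
    segment = b"[TCP-hdr]" + data             # Transport: segment
    packet = b"[IP-hdr]" + segment            # Network: packet/datagram
    frame = b"[ETH-hdr]" + packet + b"[FCS]"  # Data Link: frame + trailer
    return frame                              # Physical: encoded as bits

def de_encapsulate(frame: bytes) -> bytes:
    """Reverse the process at the receiver, stripping one header per layer."""
    packet = frame[len(b"[ETH-hdr]"):-len(b"[FCS]")]
    segment = packet[len(b"[IP-hdr]"):]
    return segment[len(b"[TCP-hdr]"):]
```

        Running user data through `encapsulate` and then `de_encapsulate` returns the original bytes, mirroring how the receiving host rebuilds the data stream.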
       The Cisco Three-Layer Hierarchical Model
       The following are the three layers and their typical functions:
       * The core layer: Backbone
       * The distribution layer: Routing
       * The access layer: Switching
          Each layer has specific responsibilities. Remember, however, that the three layers are logical and are not necessarily physical devices.
       The Core Layer
         The core layer is literally the core of the network. At the top of the hierarchy, the core layer is responsible for transporting large amounts of traffic both reliably and quickly. The only purpose of the network’s core layer is to switch traffic as fast as possible. The traffic transported across the core is common to a majority of users. However, remember that user data is processed at the distribution layer, which forwards the requests to the core if needed.
          If there is a failure in the core, every single user can be affected. Therefore, fault tolerance at this layer is an issue. The core is likely to see large volumes of traffic, so speed and latency are driving concerns here. Given the function of the core, we can now consider some design specifics.
       Let’s start with some things we don’t want to do:
       * � Don’t do anything to slow down traffic. This includes using access lists, routing between virtual local area networks (VLANs), and packet filtering.
       * � Don’t support workgroup access here. � Avoid expanding the core (i.e., adding routers) when the internetwork grows. If performance becomes an issue in the core, give preference to upgrades over expansion.
       Now, there are a few things that we want to do as we design the core. They include the following:
       * Design the core for high reliability. Consider data-link technologies that facilitate both speed and redundancy, such as FDDI, Fast Ethernet (with redundant links), or even ATM.
       * Design with speed in mind. The core should have very little latency.
       * Select routing protocols with lower convergence times. Fast and redundant data-link connectivity is no help if your routing tables are shot!
       The Distribution Layer
         The distribution layer is sometimes referred to as the workgroup layer and is the communication point between the access layer and the core. The primary functions of the distribution layer are to provide routing, filtering, and WAN access and to determine how packets can access the core, if needed. The distribution layer must determine the fastest way that network service requests are handled—for example, how a file request is forwarded to a server. After the distribution layer determines the best path, it forwards the request to the core layer if needed. The core layer then quickly transports the request to the correct service. The distribution layer is the place to implement policies for the network. Here you can exercise considerable flexibility in defining network operation. There are several actions that generally should be done at the distribution layer. They include the following:
       * Routing
        * Implementation of tools such as access lists, packet filtering, and queuing
       * Implementation of security and network policies, including address translation and firewalls
       * Redistribution between routing protocols, including static routing
       * Routing between VLANs and other workgroup support functions
       * Definitions of broadcast and multicast domains
       Things to avoid at the distribution layer are limited to those functions that exclusively belong to one of the other layers.
       The Access Layer
         The access layer controls user and workgroup access to internetwork resources. The access layer is sometimes referred to as the desktop layer. The network resources most users need will be available locally. The distribution layer handles any traffic for remote services. The following are some of the functions to be included at the access layer:
       * Continued (from distribution layer) access control and policies
       * Creation of separate collision domains (segmentation)
       * Workgroup connectivity into the distribution layer
        Technologies such as DDR and Ethernet switching are frequently seen in the access layer. Static routing (instead of dynamic routing protocols) is seen here as well. As already noted, three separate levels do not imply three separate routers. There could be fewer, or there could be more. Remember, this is a layered approach.


       Exam Essentials
         Remember the possible causes of LAN traffic congestion. Too many hosts in a broadcast domain, broadcast storms, multicasting, and low bandwidth are all possible causes of LAN traffic congestion.
         Understand the difference between a collision domain and a broadcast domain. A collision domain is an Ethernet term used to describe a network collection of devices in which one particular device sends a packet on a network segment, forcing every other device on that same segment to pay attention to it. A broadcast domain is where a set of all devices on a network segment hear all broadcasts sent on that segment.
         Understand the difference between a hub, a bridge, a switch, and a router. Hubs create one collision domain and one broadcast domain. Bridges break up collision domains but create one large broadcast domain. They use hardware addresses to filter the network. Switches are really just multiple port bridges with more intelligence. They break up collision domains but create one large broadcast domain by default. Switches use hardware addresses to filter the network. Routers break up broadcast domains (and collision domains) and use logical addressing to filter the network.
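          The hub/bridge/switch/router distinctions above can be restated as a tiny lookup table. This is an illustrative sketch of the default behavior described here, not an exhaustive model of real devices:

```python
# Default behavior of each device, per the summary above:
#            (breaks collision domains, breaks broadcast domains)
DEVICE_BEHAVIOR = {
    "hub":    (False, False),
    "bridge": (True,  False),
    "switch": (True,  False),
    "router": (True,  True),
}

def collision_domains(device: str, ports: int) -> int:
    """Collision domains created by one device with the given port count."""
    breaks_collision, _ = DEVICE_BEHAVIOR[device]
    return ports if breaks_collision else 1

def broadcast_domains(device: str, ports: int) -> int:
    """Broadcast domains created by one device with the given port count."""
    _, breaks_broadcast = DEVICE_BEHAVIOR[device]
    return ports if breaks_broadcast else 1
```

          So a 12-port switch yields 12 collision domains but a single broadcast domain, while a 4-interface router yields 4 of each.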
       Remember the Presentation layer protocols. PICT, TIFF, JPEG, MIDI, MPEG, QuickTime, and RTF are examples of Presentation layer protocols.
         Remember the difference between connection-oriented and connectionless network services. Connection-oriented uses acknowledgments and flow control to create a reliable session. More overhead is used than in a connectionless network service. Connectionless services are used to
       send data with no acknowledgments or flow control. This is considered unreliable.
         Remember the OSI layers. You must remember the seven layers of the OSI model and what function each layer provides. The Application, Presentation, and Session layers are upper layers and are responsible for communicating from a user interface to an application. The Transport layer provides segmentation, sequencing, and virtual circuits. The Network layer provides logical network addressing and routing through an internetwork. The Data Link layer provides framing and placing of data on the network medium. The Physical layer is responsible for taking ones and zeros and encoding them into a digital signal for transmission on the network segment.
         Remember the types of Ethernet cabling and when you would use them. The three types of cables that can be created from an Ethernet cable are: straight-through (to connect a PC’s or a router’s Ethernet interface to a hub or switch), crossover (to connect hub to hub, hub to switch, switch to switch, or PC to PC), and rolled (for a console connection from a PC to a router or switch).
         Understand how to connect a console cable from a PC to a router and start HyperTerminal. Take a rolled cable and connect it from the COM port of the host to the console port of a router. Start HyperTerminal and set the BPS to 9600 and flow control to None.
       Remember the three layers in the Cisco three-layer model. The three layers in the Cisco hierarchical model are the core, distribution, and access layers.
       Answers to Review Questions
       Protocol Data Units are used to define data at each layer of the OSI model. PDUs at the Network layer are called packets.
       When a virtual circuit is created, windowing is used for flow control and acknowledgment of data.
       Data is encapsulated with a media access method at the Data Link layer, and the Protocol Data Unit (PDU) is called a frame.
       The only layer of the OSI model that can actually change data is the Presentation layer.
       Baseband signaling is a technique that uses the entire bandwidth of a wire when transmitting. Broadband wiring uses many signals at the same time on a wire. These are both considered Ethernet signaling types.
       10BaseT and 100BaseT have a distance limitation of 100 meters.
       Hubs cannot run full-duplex Ethernet. Full duplex must be used on a point-to-point connection between two devices capable of running full duplex. Switches and hosts can run full duplex between each other.
       From a COM port of a PC or other host, connect a rolled cable to the console port of the router, start HyperTerminal, set the BPS to 9600 and flow control to None, then press Enter to connect.
        No collision should occur on a point-to-point full-duplex Ethernet segment, and full-duplex Ethernet should be faster than half-duplex Ethernet.
       Testking



       For a broadcast address, all bits after the network mask are 1s.
       For example, 192.168.24.59 /30: 59 = 001110|11 (both host bits are 1, so .59 is the broadcast address).
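       The same check can be done with Python’s standard `ipaddress` module, which computes the broadcast address of the enclosing network for you:

```python
import ipaddress

def is_broadcast(addr_with_prefix: str) -> bool:
    """True when the host bits of the address are all 1s."""
    iface = ipaddress.ip_interface(addr_with_prefix)
    return iface.ip == iface.network.broadcast_address

# 59 = 001110|11 -> both host bits of the /30 are set, so
# 192.168.24.59 is the broadcast address of 192.168.24.56/30.
```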








   by lake 20050204