Networks can be divided into two categories: those using point-to-point connections and those using broadcast channels. Broadcast channels are sometimes referred to as multi-access channels or random access channels. The protocols used to determine who goes next on a multi-access channel belong to a sublayer of the data link layer called the MAC (Medium Access Control) sublayer. The MAC sublayer is especially important in LANs, nearly all of which use a multi-access channel to communicate. WANs, in contrast, use point-to-point links, except for satellite networks.
The central problem is deciding how to allocate a single broadcast channel among competing users. Two different schemes are used for this: static and dynamic channel allocation.
Static Channel Allocation in LANs and MANs
The traditional way of allocating a single channel, such as a single trunk, among multiple competing users is FDM (Frequency Division Multiplexing). If there are ‘n’ users, the bandwidth is divided into ‘n’ equal-sized portions. Each user has a private frequency band, and there is no interference between users. When there is only a small and fixed number of users, each of which has a heavy load of traffic, frequency division multiplexing is a simple and efficient allocation mechanism.
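As a small illustration (the function name and bandwidth figures here are hypothetical, chosen only for the example), static FDM allocation amounts to cutting the total bandwidth into n equal private sub-bands:

```python
# Sketch of static FDM allocation (hypothetical example, not a real API):
# the total bandwidth is split into n equal, private sub-bands, one per user.

def fdm_allocate(total_bandwidth_hz, n_users):
    """Return the (start, end) frequency band assigned to each user."""
    band = total_bandwidth_hz / n_users          # equal-sized portion per user
    return [(i * band, (i + 1) * band) for i in range(n_users)]

# Example: a 3 MHz trunk shared statically among 3 users.
for user, (lo, hi) in enumerate(fdm_allocate(3_000_000, 3)):
    print(f"user {user}: {lo/1e6:.1f}-{hi/1e6:.1f} MHz")
```

Each user keeps its band permanently, whether or not it has anything to send, which is exactly the source of the inefficiency discussed next.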
However, when the number of senders is large and continuously varying, or the traffic is bursty, FDM presents some problems. If the spectrum is cut up into ‘n’ regions and fewer than ‘n’ users are currently interested in communicating, a large piece of the valuable spectrum will be wasted. If more than ‘n’ users want to communicate, some of them will be denied permission for lack of bandwidth, even if some of the users who have been assigned a frequency band hardly ever transmit or receive anything.
However, even assuming that the number of users could somehow be held constant at ‘n’, dividing the single available channel into static sub-channels is inherently inefficient. The basic problem is that when some users are idle, their bandwidth is simply lost. They are not using it, and no one else is allowed to use it. Furthermore, in most computer systems, data traffic is extremely bursty. Consequently, most of the sub-channels will be idle most of the time.
The same arguments that apply to FDM also apply to TDM (Time Division Multiplexing). Each user is statically allocated every nth time slot. If a user does not use its allocated slot, the slot lies idle. Since none of the traditional static channel allocation methods works well with bursty traffic, dynamic methods are used.
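The waste under bursty traffic can be sketched with a short simulation (all parameters here are hypothetical): each slot belongs to one user, and the slot is used only when its owner happens to have a frame ready.

```python
import random

# Sketch of static TDM waste (hypothetical parameters): each slot is owned
# by one user, and is used only if that owner has a frame ready, which
# happens with probability p_busy. Other users may not borrow the slot.

def tdm_utilization(n_slots, p_busy, seed=0):
    """Fraction of slots actually carrying a frame."""
    rng = random.Random(seed)
    used = sum(1 for _ in range(n_slots) if rng.random() < p_busy)
    return used / n_slots

# If each owner is busy only 10% of the time, ~90% of slots lie idle,
# regardless of how much traffic the other users have queued.
print(f"utilization ~ {tdm_utilization(100_000, 0.10):.2f}")
```

This is the quantitative version of the argument above: with static allocation, channel utilization is capped by how often each individual owner has data, not by total demand.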
Key Assumptions for Dynamic Channel Allocation in LANs and MANs
There are five key assumptions:
1. Station Model: The model consists of ‘n’ independent stations (computers, telephones), each with a program or user that generates frames for transmission. The probability of a frame being generated in an interval of length Δt is λΔt, where λ is a constant (the arrival rate of new frames). Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.
2. Single-channel Assumption: A single channel is available for all communication. All stations can transmit on it, and all can receive from it. As far as the hardware is concerned, all stations are equivalent, although protocol software may assign priorities to them.
3. Collision Assumption: If two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision. All stations can detect collisions. A collided frame must be transmitted again later. There are no errors other than those generated by collisions.
4. a) Continuous Time: Frame transmission can begin at any instant. There is no master clock dividing time into discrete intervals.
   b) Slotted Time: Time is divided into discrete intervals (slots). Frame transmission always begins at the start of a slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision.
5. a) Carrier Sense: Stations can tell whether the channel is in use before trying to use it. If the channel is sensed as busy, no station will attempt to use it until it goes idle.
   b) No Carrier Sense: Stations cannot sense the channel before trying to use it. They just go ahead and transmit. Only later can they determine whether or not the transmission was successful.
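The station model's arrival process can be sketched in a few lines (the rate and interval values are hypothetical): in each short interval of length dt, an unblocked station generates a frame with probability λ·dt.

```python
import random

# Sketch of the station model's arrival process (hypothetical parameters):
# in each short interval of length dt, a frame is generated with
# probability lam * dt, where lam is the arrival rate of new frames.

def simulate_arrivals(lam, dt, n_intervals, seed=42):
    """Count frames generated over n_intervals intervals of length dt."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_intervals) if rng.random() < lam * dt)

# With lam = 2 frames/sec and dt = 1 ms over 100 seconds of simulated
# time, we expect roughly lam * 100 = 200 frames.
frames = simulate_arrivals(lam=2.0, dt=0.001, n_intervals=100_000)
print(frames)
```

As dt shrinks, this per-interval rule is exactly the defining property of a Poisson arrival process with rate λ.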
The first assumption says that stations are independent and that work is generated at a constant rate. It also assumes that each station has one program or user, so while the station is blocked, no new work is generated. More sophisticated models allow multi-program stations that can generate work even while blocked, but their analysis is much more complex.
The single-channel assumption is the heart of the matter: there is no external way to communicate. The collision assumption is also basic, although some LANs, such as token rings, use a mechanism that eliminates collisions entirely. There are two alternative assumptions about time: either it is continuous or it is slotted. Some systems use one and some use the other, but for a given system only one of them holds. Similarly, a network can either have carrier sensing (detecting the electrical signal on the channel) or not have it. LANs generally have carrier sense, but satellite networks do not.
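The slotted-time and collision assumptions can be combined into a short sketch (all parameters hypothetical): if every station transmits independently in a slot with some probability p, the slot ends up idle, successful, or garbled.

```python
import random

# Sketch combining the slotted-time and collision assumptions
# (hypothetical parameters): in each slot, every station transmits
# independently with probability p. A slot with 0 senders is idle,
# 1 sender is a success, and >= 2 senders is a collision.

def classify_slots(n_stations, p, n_slots, seed=1):
    rng = random.Random(seed)
    counts = {"idle": 0, "success": 0, "collision": 0}
    for _ in range(n_slots):
        senders = sum(1 for _ in range(n_stations) if rng.random() < p)
        if senders == 0:
            counts["idle"] += 1
        elif senders == 1:
            counts["success"] += 1
        else:
            counts["collision"] += 1  # garbled frames; retransmit later
    return counts

print(classify_slots(n_stations=10, p=0.1, n_slots=10_000))
```

Even at modest per-station load, a noticeable fraction of slots carry collisions, which is why the dynamic protocols built on these assumptions focus on detecting collisions and scheduling retransmissions.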