Open Systems Interconnection (OSI) was established as an international standard for network architecture, an initiative led by ISO. The term OSI has two meanings. It refers to:
- Protocols that are authorized by ISO
- OSI basic reference model
The OSI reference model divides the required functions of a network architecture into several layers and defines the function of each layer. Layering the communications process means breaking it down into smaller, easier-to-handle, interdependent categories, with each solving an important and distinct aspect of the data exchange process. The objective of this detail is to develop an understanding of the complexity and sophistication that the technology has achieved, in addition to developing a concept of the inner workings of the various components that contribute to the data communications process.
Physical Data Encoding
Information exchanged between two computers is physically carried by means of electrical signals encoded according to certain methods. These methods can be characterized by changing voltage levels, current levels, frequency of transmission, phase changes, or any combination of these physical aspects of electrical activity. For two computers to reliably exchange data, they must have compatible implementations for encoding and interpreting the data-carrying electrical signals. Over time, network vendors have defined different standards for encoding data on the wire.
Physical encoding also deals with the type of media used (fiber, copper, wireless, and so on), which is dictated by the desired bandwidth, immunity to noise, and attenuation properties. These factors determine the maximum allowable media length over which a desirable level of guaranteed data transmission can still be achieved.
Data Flow Control
Data communication processes allocate memory resources, commonly known as communication buffers, for the transmission and reception of data. A computer whose communication buffers become full while it is still receiving data runs the risk of discarding extra transmissions and losing data unless a data flow control mechanism is employed, as shown in Figure. A proper data flow control technique calls on the receiving process to send a 'stop sending' signal to the sending computer whenever it cannot cope with the rate at which data is being transmitted. The receiving process later sends a 'resume sending' signal when data communication buffers become available.
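The stop/resume behavior described above can be sketched as follows. This is an illustrative model rather than any particular protocol; the buffer capacity and resume threshold are made-up parameters.

```python
# Sketch of receiver-driven flow control: the receiver raises a 'stop
# sending' signal when its communication buffers fill up, and a 'resume
# sending' signal once enough buffers are freed by processing.

from collections import deque

class Receiver:
    def __init__(self, capacity=4, resume_at=2):
        self.buffer = deque()
        self.capacity = capacity          # total communication buffers
        self.resume_at = resume_at        # level at which to resume
        self.sending_allowed = True       # flow-control state the sender sees

    def receive(self, item):
        if not self.sending_allowed:
            raise RuntimeError("sender ignored 'stop sending' signal")
        self.buffer.append(item)
        if len(self.buffer) >= self.capacity:
            self.sending_allowed = False  # 'stop sending'

    def process_one(self):
        self.buffer.popleft()             # free one communication buffer
        if len(self.buffer) <= self.resume_at:
            self.sending_allowed = True   # 'resume sending'

rx = Receiver()
for i in range(4):
    rx.receive(i)
assert rx.sending_allowed is False        # buffers full: stop signal raised
rx.process_one()
rx.process_one()
assert rx.sending_allowed is True         # buffers freed: resume signal
```

Without the stop signal, the fifth transmission would simply be discarded, which is exactly the data loss the mechanism exists to prevent.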
Data Frame Format
For exchanging information between computers, communication processes have the following requirements:
- The receiving computer must be capable of distinguishing between an information-carrying signal and mere noise.
- There should be a mechanism for the receiving computer to detect whether the information-carrying signal is intended for itself, for some other computer on the network, or is a broadcast (a message intended for all computers on the network).
- If the receiving end engages in recovering data from the medium, it should be able to recognize where the data train intended for it ends. After this determination is made, the receiver should discard subsequent signals unless it can determine that they belong to a new, impending transmission.
- After receiving the complete information, the receiving end must also be capable of recognizing and dealing with any corruption introduced into the information by noise or electromagnetic interference.
To accommodate the above requirements, data is delivered in well-defined packages called data frames, as shown in Figure. The receiving end verifies the contents of each data frame, typically by comparing an error-check field carried in the frame with a value it computes over the received bits. If the comparison is favorable, the contents of the information field are submitted for processing; otherwise, the entire frame is discarded. The primary concern of the receiving process is the reliable recovery of the information embedded in the frame.
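The requirements above can be illustrated with a toy frame layout. The field sizes, the broadcast address value, and the use of CRC-32 below are assumptions for the sake of the sketch, not any real standard's format.

```python
# A simplified data frame: destination address, payload length, payload,
# and a frame check sequence (FCS). The receiver recomputes the checksum
# and compares it with the one carried in the frame; on mismatch the
# whole frame is discarded. It also ignores frames addressed elsewhere.

import struct
import zlib

BROADCAST = 0xFFFF   # made-up 'all stations' address

def build_frame(dest_addr, payload):
    header = struct.pack("!HH", dest_addr, len(payload))
    fcs = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + fcs

def receive_frame(frame, my_addr):
    header, body = frame[:4], frame[4:-4]
    dest, length = struct.unpack("!HH", header)
    (fcs,) = struct.unpack("!I", frame[-4:])
    if zlib.crc32(header + body) != fcs:
        return None                       # corrupted: discard entire frame
    if dest not in (my_addr, BROADCAST):
        return None                       # not addressed to this station
    return body[:length]                  # submit information field

frame = build_frame(0x0042, b"hello")
assert receive_frame(frame, 0x0042) == b"hello"
assert receive_frame(frame, 0x0099) is None          # other station ignores it
corrupted = frame[:-1] + bytes([frame[-1] ^ 0xFF])
assert receive_frame(corrupted, 0x0042) is None      # bad FCS: frame discarded
```

The three outcomes mirror the requirements listed earlier: accept a well-formed frame addressed to this station, ignore frames addressed elsewhere, and discard corrupted frames outright.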
As networks grow in size, so does the traffic imposed on the wire, which in turn impacts overall network performance, including response times. To alleviate such degradation, network specialists resort to breaking the network into multiple networks that are interconnected by specialized devices, including bridges, routers, and switches, as shown in Figure.
The routing approach calls for the implementation of various cooperative processes, in both routers and workstations, whose main concern is to allow the intelligent delivery of data to its ultimate destination. Data exchange can take place between any two workstations, whether or not they belong to the same network.
Network Address and Complete Address
In addition to the data link address, which should be unique for each workstation on a particular physical network, all workstations must have a higher-level address in common. This is known as the network address. The network address is very similar in function and purpose to the concept of a street name. A street name is common to all residences located on that street.
Unlike data link addresses, which are mostly hardwired on the network interface card, network addresses are software configurable. It should also be noted that the data structure and the rules for assigning network addresses vary from one networking technology to another.
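The two-level addressing scheme can be sketched as follows. The address formats below (a MAC-style link address and a named network plus host number) are illustrative assumptions; real technologies define their own structures.

```python
# Sketch of two-level addressing: each workstation has a hardwired data
# link address (unique per interface card) and a software-configured
# network address whose network portion is shared by all stations on the
# same network, much as a street name is shared by every residence on
# that street.

class Workstation:
    def __init__(self, link_addr, network, host):
        self.link_addr = link_addr   # burned into the interface card
        self.network = network       # configurable; common to the network
        self.host = host             # distinguishes stations on that network

    def same_network(self, other):
        return self.network == other.network

a = Workstation("00:1A:2B:3C:4D:5E", network="net-10", host=1)
b = Workstation("00:1A:2B:3C:4D:5F", network="net-10", host=2)
c = Workstation("00:9F:8E:7D:6C:5B", network="net-20", host=1)

assert a.same_network(b)        # same street: direct delivery is possible
assert not a.same_network(c)    # different network: a router must forward
```

This same-network test is exactly what routing processes rely on to decide between direct delivery and forwarding through a router.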
Inter-process Dialog Control
When two applications engage in the exchange of data, they establish a session between them. Consequently, a need arises to control the flow and direction of data between them for the duration of the session. Depending on the nature of the involved applications, the dialog type might have to be set to full-duplex, half-duplex, or simplex mode of communication. Even after setting the applicable communications mode, applications might require that the dialog itself be arbitrated. For example, in the case of half-duplex communications, it is important that applications somehow know when to talk and for how long.
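Half-duplex arbitration can be sketched with a simple turn token, a hypothetical mechanism used here only to make the idea concrete; real session protocols negotiate turn-taking in their own ways.

```python
# Sketch of half-duplex dialog arbitration: a turn token determines
# which party may transmit; the other must wait until the token is
# explicitly passed to it.

class HalfDuplexSession:
    def __init__(self, first_speaker):
        self.turn = first_speaker            # who holds the 'talk' token

    def send(self, speaker, message, log):
        if speaker != self.turn:
            raise RuntimeError(f"{speaker} must wait its turn")
        log.append((speaker, message))

    def pass_turn(self, to):
        self.turn = to                       # hand the token to the peer

log = []
s = HalfDuplexSession(first_speaker="A")
s.send("A", "request", log)
s.pass_turn("B")
s.send("B", "response", log)
assert log == [("A", "request"), ("B", "response")]
```

In a full-duplex session no such token is needed, since both parties may transmit simultaneously; in simplex mode the token would never be passed at all.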
Another application-oriented concern is the capability to reliably recover from failures at minimum cost. This can be achieved by providing a checkpoint mechanism, which enables the resumption of activities from the last checkpoint. As an example, consider invoking a file transfer application to have five files transferred from point A to point B on the network. Unless a proper checkpoint mechanism is in place, a failure of some sort during the transfer might require the retransmission of all five files, regardless of where in the process the failure took place. Checkpointing circumvents this requirement by retransmitting only the affected files, saving time and bandwidth.
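The five-file example can be sketched as follows; the `checkpoint` set and the simulated link failure are illustrative devices, not a real file transfer API.

```python
# Sketch of checkpointing a multi-file transfer: after each file is
# delivered, a checkpoint records it, so a later retry retransmits only
# the files not yet checkpointed rather than starting over.

def transfer_files(files, checkpoint, send, fail_on=None):
    """Transfer files in order, skipping those already checkpointed."""
    for name in files:
        if name in checkpoint:
            continue                      # delivered before failure: skip
        if name == fail_on:
            raise ConnectionError(f"link failed while sending {name}")
        send(name)
        checkpoint.add(name)              # record successful delivery

files = ["f1", "f2", "f3", "f4", "f5"]
checkpoint, sent = set(), []
try:
    transfer_files(files, checkpoint, sent.append, fail_on="f4")
except ConnectionError:
    pass                                  # failure mid-transfer
transfer_files(files, checkpoint, sent.append)   # resume after the failure
assert sent == ["f1", "f2", "f3", "f4", "f5"]    # f1-f3 were not re-sent
```

Without the checkpoint set, the retry would begin again at f1 and every file would cross the network twice.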
Whenever two or more communicating applications run on different platforms, another concern arises about the differences in the syntax of the data they exchange. Resolving these differences requires an additional process. Good examples of presentation problems are the existing incompatibilities between the ASCII and EBCDIC standards of character encoding, terminal emulation incompatibilities, and incompatibilities due to data encryption techniques.
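The ASCII/EBCDIC incompatibility is easy to demonstrate, since Python ships codecs for EBCDIC code pages (the choice of `cp500` below is one such page, used here only for illustration):

```python
# Sketch of a presentation-layer concern: the same characters have
# different byte values under ASCII and EBCDIC, so data crossing
# between such platforms must be translated, not passed through as-is.

text = "HELLO"
ascii_bytes = text.encode("ascii")
ebcdic_bytes = text.encode("cp500")           # an EBCDIC code page

assert ascii_bytes != ebcdic_bytes            # same text, incompatible bytes
assert ebcdic_bytes.decode("cp500") == text   # translation recovers the text
```

Interpreting the EBCDIC bytes as ASCII would yield garbage, which is precisely why an explicit translation step belongs in the communications process whenever such platforms interoperate.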