Today, the two-tier client/server architecture is still the most widely implemented computer architecture. An application is considered two-tier when database services are detached from the application so that they can run independently on another computer (Fastie 1999). The two tiers in this computing model are the database server tier and the client tier; application logic and processing tasks are shared between them (Fastie 1999).
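To make that division of labor concrete, the sketch below (in Python) shows the two-tier pattern: the client tier holds the presentation and application logic and issues SQL directly against the database tier. The table, the business rule, and the use of an in-memory SQLite database as a stand-in for a networked database server are illustrative assumptions, not details taken from the sources cited.

    # Minimal sketch of the two-tier pattern: the client holds presentation and
    # application logic and talks directly to the database tier. sqlite3 stands
    # in for a networked database server purely for illustration.
    import sqlite3

    def connect_to_database_tier():
        # Tier 2: database server -- here an in-memory database seeded with sample rows.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         [(1, 50.0), (2, 150.0), (3, 300.0)])
        return conn

    def client_tier():
        # Tier 1: client -- presentation and application logic both live here.
        conn = connect_to_database_tier()
        orders = conn.execute(
            "SELECT id, total FROM orders WHERE total > ?", (100,)
        ).fetchall()
        # Business rule applied on the client, not on the server.
        discounted = [(oid, total * 0.9) for oid, total in orders]
        for oid, total in discounted:
            print(f"order {oid}: {total:.2f}")
        conn.close()

    if __name__ == "__main__":
        client_tier()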
Client/server software architectures were developed in the 1980s from the file server architecture design (Sadoski 1997). They improve the usability, scalability, and flexibility of shared computing infrastructure (Schussel 1996). Moreover, the architecture requires only minimal operator intervention.
For many years, the set-up of this model seemed ideal – a Windows/GUI-based PC manages the application and presentation activities at the client level, while the server accesses the database. Such a scenario also seemed suitable for Internet applications. Managing and storing data in a central location offers several advantages. First, every item of data stored in the central location can be accessed and worked on by all users. Second, security and business rules need be defined only once, on the server, and are thus enforced for all users. Next, relational database servers optimize network traffic by returning only the data the application needs. Fourth, hardware costs are minimized. Finally, maintenance tasks such as backing up and restoring data are reduced, since they need only target the central server.
However, the two-tier client/server model has many disadvantages, and these drawbacks call into question its suitability for the World Wide Web.
Two-tier client/server software architectures are suitable only for processing information that is not time-critical and for systems whose management and operation are not complicated (Sadoski 1997). Systems built on this design should carry only a light transaction load, and the architecture works well only in environments where the processing/business rules change infrequently (Sadoski 1997). Furthermore, the number of users should not exceed 100 if efficiency is to be maintained.
Beyond 100 users, the architecture's performance capacity is exceeded and the network becomes saturated (Schussel 1996). The reason for this limitation is that the client and the server continuously exchange “keep alive” connection messages, even when idle. This usage consideration is crucial because most Internet Service Providers (ISPs) serve far more users than that. In any case, two-tier client/server systems may be best suited to small businesses.
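As a rough illustration of the per-connection overhead just described, the sketch below shows an idle client that still sends a periodic “keep alive” message to its server; multiplied by every connected client, this traffic grows even when no real work is being done. The interval, message format, and socket details are assumptions made purely for illustration, not a real protocol.

    # Hedged sketch: each idle client still pings the server at a fixed interval,
    # so server-side traffic scales with the number of connected clients.
    import socket
    import time

    KEEPALIVE_INTERVAL_SECONDS = 30  # illustrative value

    def idle_client(server_host: str, server_port: int) -> None:
        with socket.create_connection((server_host, server_port)) as sock:
            while True:
                sock.sendall(b"KEEPALIVE\n")   # sent even though the client is idle
                time.sleep(KEEPALIVE_INTERVAL_SECONDS)

    # With N idle clients, the server still handles N messages per interval,
    # which is one reason performance degrades as the client count grows.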
Two-tier architectures may also be hard to maintain and administer: because the application resides on the client, every upgrade must be delivered (downloaded) and installed on each client, and each such activity increases the administrative load. Add to this the lack of uniformity in client configurations and the server's lack of control over subsequent changes to those configurations (Sadoski 1997).
Another problem with two-tier client/server architectures concerns batch jobs: the design is not effective at executing batch programs. In most instances the client is tied up until the batch job ends, even when the server itself executes the job, with negative results for both the client users and the batch job (Edelstein 1994).
The World Wide Web is a distributed information system based on hypertext. Hypertext displays information containing hyperlinks (i.e., one-click “links”/references to other information on the Internet or any other system). As with any interactive computer application, hypertext systems include an interface that allows the user to choose a node, read it, and move from there to one of the linked nodes (Dillon et al. 1996).
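A tiny sketch may help make the node-and-link model concrete: a node of content carries named links to other nodes, and following a link moves the reader to the linked node. The data structure and the node names below are illustrative assumptions, not part of any Web standard.

    # Minimal sketch of a hypertext node with named links to other nodes.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        title: str
        text: str
        links: dict[str, "Node"] = field(default_factory=dict)

    home = Node("Home", "Welcome. See the architecture overview.")
    overview = Node("Overview", "Two-tier and three-tier designs compared.")
    home.links["architecture overview"] = overview

    def follow(node: Node, anchor: str) -> Node:
        # Reading a node and choosing a link moves the user to the linked node.
        return node.links[anchor]

    current = follow(home, "architecture overview")
    print(current.title)  # -> Overview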
Navigating through hypertext and hyperlinks on the Internet is popularly known as “surfing” the Web. A client user may open several of these links simultaneously, and many client users may be opening the same links at once. Moreover, Web navigation is not limited to opening links and passively viewing information. With features such as gaming, chatting, e-mailing, and music, animation, and video playback, client users actively take part in many Web activities, including downloading software to their PCs. A two-tier client/server architecture, given the usage limits described above, cannot handle such a workload effectively and efficiently, and the load may even cause the client PC to “crash.”
The alternative computer architecture that this writer proposes is still of the client/server type, given its versatile, message-based, and modular character (Sadoski 1997). Compared with time-sharing, mainframe, and centralized computing, the client/server model offers more advantages. First, it improves usability (with its forms-based user interface). Second, it improves flexibility (allowing data sharing). Next, it improves interoperability [i.e., the ability of two or more computer systems or components “to exchange information and to use the information that has been exchanged” (IEEE 1990)]. Finally, the client/server model improves scalability (i.e., the ease with which a component or system can be modified to fit the problem area) (Sadoski 1997).
I do not propose returning to the mainframe architecture, because such a design cannot easily support graphical user interfaces (GUIs), nor can it readily access multiple databases at geographically dispersed sites, as the World Wide Web requires (Sadoski 1997). Nor can I recommend the file-sharing architecture, because it works only for up to about 12 simultaneous users (even fewer than the 100-user capacity of the two-tier client/server architecture).
As it turns out, the client/server software architecture remains the most suitable model for the World Wide Web.
I therefore recommend the three-tier software architecture as a replacement for the two-tier one. Three-tier software architectures (also known as multi-tier architectures) can accommodate more than 100 client users. These applications separate the three program layers of the architecture, namely the presentation layer, the business logic, and the services layer, into three sections that are independent of one another (Fastie 1999).
Adding a middle tier to the application system makes application execution, database staging, and task queuing easier (Sadoski 1997). The middle tier improves the application's performance for thousands of client users and makes the set-up more flexible than two-tier applications. The familiar qualities of the two-tier model, such as usability, flexibility, and scalability, are further improved, as are the performance, maintainability, and reusability of the system. All of these qualities are delivered while the complexity of distributed processing is hidden from the client user (Sadoski & Comella-Dorda 1997).
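To show how the three independent layers relate, here is a minimal sketch in which the presentation tier calls only the middle (business logic) tier, and the middle tier calls only the data tier, so any layer can be changed or scaled without touching the client. The function names, the business rule, and the in-memory database are assumptions made purely for illustration, not details from the sources cited.

    # Hedged sketch of the three layers: presentation, business logic, data.
    import sqlite3

    # --- Data tier: owns storage and raw queries ---
    def data_tier():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         [(1, 50.0), (2, 150.0), (3, 300.0)])
        return conn

    def fetch_orders(conn, minimum_total):
        return conn.execute(
            "SELECT id, total FROM orders WHERE total > ?", (minimum_total,)
        ).fetchall()

    # --- Middle tier: business rules are defined once here, for every client ---
    def discounted_orders(conn, minimum_total=100.0, discount=0.9):
        return [(oid, total * discount) for oid, total in fetch_orders(conn, minimum_total)]

    # --- Presentation tier: the client only formats and displays results ---
    def render(rows):
        for oid, total in rows:
            print(f"order {oid}: {total:.2f}")

    if __name__ == "__main__":
        conn = data_tier()
        render(discounted_orders(conn))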
In conclusion, this writer believes that because the World Wide Web is based on hypertext and Internet Service Providers (Internet servers) serve thousands of clients, the two-tier client/server model, with its many limitations, is not well suited to this type of application. The three-tier client/server software architecture is recommended as the best alternative to meet the Internet application's demands.
References:
Dillon, Andrew, Jarmo J. Levonen, Jean-Francois Rouet & Rand J. Spiro 1996, Hypertext and Cognition (European Association for Research on Learning and Instruction Conference, Aix-en-Provence, France), Lawrence Erlbaum Associates, Mahwah, NJ.
Edelstein, Herb 1994, ‘Unraveling Client/Server Architecture’, DBMS, vol. 7, no. 5, p. 34.
Fastie, Will 1999, ‘Enterprise Computing’, PC Magazine, 9 February 1999, pp. 229-230.
Heller, R.S. 1990, ‘The Role of Hypermedia in Education: A Look at the Research Issues’, Journal of Research on Computing in Education, vol. 22, no. 4, pp. 431-441.
Institute of Electrical and Electronics Engineers 1990, IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries, New York.
Microsoft Corporation 1998, Client/Server Architecture, viewed 13 August 2004.
Sadoski, Darleen 1997, Client/Server Software Architectures – An Overview, viewed 13 August 2004.
Sadoski, Darleen & Santiago Comella-Dorda 1997, Three Tier Software Architectures, viewed 13 August 2004.
Sadoski, Darleen 1997, Two Tier Software Architectures, viewed 13 August 2004.
Schussel, George 1996, Client/Server Past, Present, and Future, viewed 13 August 2004.