Objective 2.1: Recognize the effect on each of the following characteristics of two tier, three tier and multi-tier architectures: scalability, maintainability, reliability, availability, extensibility, performance, manageability, and security
Objective 2.2: Recognize the effect of each of the following characteristics on J2EE technology: scalability, maintainability, reliability, availability, extensibility, performance, manageability, and security
Performance
Performance involves minimizing the response time for a given transaction load.
Performance can be managed by controlling expensive calls, identifying bottlenecks, and taking these factors into account in the design of a system.
For example, resource pooling and minimizing the number of network calls by fetching larger chunks of data at once can help increase performance.
Performance is also positively influenced by both horizontal and vertical scalability.
On an architectural level, load distribution and load balancing are two techniques that can be used to increase the performance of a system.
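The resource-pooling idea mentioned above can be sketched in a few lines. The class below is a hypothetical, single-threaded pool, not a J2EE API: it just shows how reusing expensive objects (such as database connections) avoids paying their setup cost on every request.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal object-pool sketch (hypothetical class name, not a J2EE API).
// Expensive objects are reused instead of being created per request.
class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int created = 0;

    SimplePool(Supplier<T> factory) { this.factory = factory; }

    // Hand out an idle object if one exists; otherwise create a new one.
    T acquire() {
        if (!idle.isEmpty()) return idle.pop();
        created++;
        return factory.get();
    }

    // Return an object to the pool for reuse instead of discarding it.
    void release(T obj) { idle.push(obj); }

    // How many objects were actually constructed.
    int totalCreated() { return created; }
}
```

In a J2EE container the same principle is applied by the server itself, for example in JDBC connection pools, so application code rarely implements this by hand.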
Effects
1 tier:
 OK.
2 tier:
 Poor. The DB server can become a bottleneck, because each client requires its own connection (no pooling). The network can also become a bottleneck, because all data has to travel to the client.
n tier:
 Good. Because of scalability, performance can be influenced by choosing the right system components. Bottlenecks can be removed at the relevant layer.
J2EE:
 J2EE enhances performance by means of resource pooling, connection pooling, load balancing, and its support for scalability.
Load distribution
Load distribution is the process of allocating workload amongst a set of processing elements.
Load balancing
Load balancing is the process of transferring units of work among processing elements during execution to maintain balance across these processing elements.
DNS Round Robin
A load distribution technique where client calls are distributed sequentially among the servers in the architecture.
This means that call 1 is directed to server 1, call 2 to server 2, and so on.
After the last server has been reached, the process starts over at server 1.
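The rotation described above can be sketched as a simple counter over a server list. The class below is a hypothetical illustration; real DNS round robin achieves the same effect by rotating the order of A records returned for a host name.

```java
import java.util.List;

// Round-robin distribution sketch (hypothetical class name).
// Calls are handed to servers in order, wrapping around after the last one.
class RoundRobin {
    private final List<String> servers;
    private int next = 0;

    RoundRobin(List<String> servers) { this.servers = servers; }

    // Return the next server in sequence; wrap around at the end of the list.
    String nextServer() {
        String s = servers.get(next);
        next = (next + 1) % servers.size();
        return s;
    }
}
```

Note that this scheme distributes calls, not load: a server that receives a few heavy requests can still be busier than one that received many light ones.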
Reverse proxy load balancing
A load balancing technique where different servers are used to handle different kinds of tasks.
For example, a powerful server is used to handle costly SSL sessions, while a lighter server is used to handle static HTML.
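The routing decision a reverse proxy makes in this scheme can be sketched as a dispatch on the request's characteristics rather than a pure rotation. The host names below are hypothetical placeholders for the specialized servers mentioned above.

```java
// Reverse-proxy routing sketch: requests are sent to specialized back ends
// based on what kind of work they require (hypothetical host names).
class ReverseProxyRouter {
    static String route(String path, boolean secure) {
        if (secure) {
            return "ssl-accelerator";   // powerful server for costly SSL sessions
        }
        if (path.endsWith(".html")) {
            return "static-server";     // lighter server for static HTML
        }
        return "app-server";            // everything else goes to the app tier
    }
}
```

Because the proxy sits in front of all back ends, clients see a single address while the expensive and cheap workloads are separated behind it.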