A Seminar on Component Based Computing; Component based Software Engineering
CHAPTER 1
INTRODUCTION
1.1 BACKGROUND OF THE STUDY
Developing cost-effective
and quality products is an important and challenging aspect of software
development. Component-based software development can help developers to efficiently
produce software within the time and budget constraints. The concept of
component-based software engineering (CBSE) is based on the development of
independent and loosely coupled system components, avoiding irrelevant
dependencies among them. CBSE focuses on linkage among different components
such that one component can provide services to another through well-defined
interfaces. This way of developing normalized components facilitates rapid
software development.
The field of
software systems has become increasingly intricate and performance specific. To
produce cost-effective systems, organizations often make use of component-based
technologies as an alternative to developing the whole system from scratch. The
objective behind using component-based technologies is to reduce the cost
associated with the development.
Nevertheless, this field later evolved into a more critical area, aiming to
reduce dependence on the current market and to match rapidly emerging consumer
requirements. Currently, the usage of component-based technologies is most
frequently motivated by curtailing development costs: more functionality can be
produced with a smaller investment of time and money. However, novel issues
pertaining to dynamic configuration and scalability crop up when new components
are introduced into a system. Such issues are usually addressed using CBSE,
which provides approaches, prototypes and principles for programmers working
with component-based techniques. Component-based development (CBD) emphasizes
improving these techniques by creating significant, reusable components.
Although the CBSE research area has existed for the last two decades, several
issues remain unresolved. Several CBSE solutions have been achieved by
utilizing rules and techniques from different design procedures, including
component-based techniques.
CBSE is a recognized field of software engineering. Its strategies and
approaches are based on architecture definition languages (ADLs), middleware,
object-oriented design and software architectures. Nevertheless, the nature of
software is different from that of industrial products; therefore, a direct
translation of rules from the conventional engineering disciplines into
software architecture is not achievable. For instance, understanding what a
component is poses no problem in the conventional engineering disciplines,
because a component is generally intuitively realized and fits efficiently with
the fundamental concepts and principles of engineering design and modelling.
However, the same does not hold for software components. A common definition
states, “A software component is a unit of composition with contractually
specified interfaces and explicit context dependencies only. A software
component can be deployed independently and is subject to composition by third
parties”.
Many prototypes depend on specific platforms such as Enterprise JavaBeans,
DCOM, Microsoft .NET, etc. CBS-specific risks arise when the features of the
selected components do not completely fulfill system requirements, or when the
components provide additional features that are undesirable in a given system.
The particular connections among components play a vital role throughout
requirements analysis. Adapters and wrappers are among the earliest
integration methods.
CBSE is characterized by two development processes: the development of reusable
components and the integration of components into systems. The ability to
measure quality attributes in CBSE facilitates better understanding, assessment
and control of risk across the whole software development lifecycle. Component
producers are concerned with the implementation, design and maintenance of
components, while component consumers are interested in selecting suitable
components. CBSE metric approaches not only measure the quality of a component,
but also address ambiguous descriptions and the mathematical properties whose
absence can invalidate quality metrics.
Information
technology is facing enormous challenges such as high demands to meet the
product deadlines with minimum development time and cost. Reusability approach
of the software component is required to optimize software development cost and
time. Many software organizations are adopting CBSE methodologies in order to
meet requirements of the customers to deliver products at competitive cost.
Incorporating existing components in the software development process can
improve software productivity. Similarly, employing state-of-the-art tools is
also very useful in this scenario. In addition, the risk associated with
software products can be reduced, as the use of the latest software and other
domain-specific tools increases the possibility of identifying and fixing
errors. CBD approaches integrate components into a system through a set of
well-defined interfaces. These components interact with other components to
perform the functions of the system; however, the internal framework of the
components and the implementation of the interfaces are always hidden from the
outside world.
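The interface-based interaction described above can be sketched in a short example; the component and interface names here are illustrative, not drawn from any specific component model:

```python
from abc import ABC, abstractmethod

# The contractually specified interface: the only part visible to other components.
class SpellChecker(ABC):
    @abstractmethod
    def check(self, word: str) -> bool: ...

# A concrete component; its internal framework (the word list) is hidden
# behind the interface and can be replaced without affecting consumers.
class SimpleSpellChecker(SpellChecker):
    def __init__(self, words):
        self._words = set(w.lower() for w in words)  # internal state, not exposed

    def check(self, word: str) -> bool:
        return word.lower() in self._words

# A consuming component depends only on the interface, not the implementation.
def count_misspellings(checker: SpellChecker, text: str) -> int:
    return sum(1 for w in text.split() if not checker.check(w))

checker = SimpleSpellChecker(["component", "based", "software"])
print(count_misspellings(checker, "component besed software"))  # 1
```

Because the consumer holds only a `SpellChecker` reference, the `SimpleSpellChecker` component could be swapped for any other implementation of the same interface without any change to the consuming code.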
1.2 PROBLEM STATEMENT
As technology advances by the day, computing becomes much more complex and
systems grow bulkier, giving rise to numerous problems that led to the
componentization of software systems. These problems include:
·
The size and complexity of software
increase rapidly.
·
Single products become part of product
families.
·
Software is upgraded after deployment.
·
The time-to-market must decrease
significantly.
·
The cost of products is high and must be
reduced.
1.3 AIM AND OBJECTIVES
The aim of this research is to discuss component-based computing in general and
the reasons why computing should be componentized. To accomplish this within
the limited time, the ultimate goal is to create a reference architecture based
on component-based client-server computing. The component technologies
discussed are CORBA and DCOM. The objectives set to achieve the goals of this
research are defined as follows:
1.
To discuss the historical background of
Component Based computing.
2.
To present a general overview of
Component Based computing using client/server computing as a case study.
3.
To discuss client/server computing using
CORBA and DCOM technologies.
4.
To discuss frameworks and their roles in client/server
computing.
1.4 SIGNIFICANCE OF STUDY
This research is important because component-based software development (CBSD)
creates a system by bringing smaller parts together in a manageable way, in
order to reduce the complexity and size of systems. Due to the rapid progress
in the world of technology today, there are problems associated with system
upgrades after deployment. Componentizing these systems makes such upgrades
faster by enabling component reuse, saving the time, cost and energy that would
otherwise be spent building new systems from scratch, and thereby meeting the
needs of customers in time.
1.5 SCOPE OF STUDY
The study
covers component based software engineering which is a branch of software
engineering. It covers the reviews made by researchers on CBSE based on some
points. It also elaborates more on the frameworks for component based
client/server computing and discusses the technologies involved.
1.6 LIMITATIONS
Poor internet access as well as limited power supply served as limitations
during this research work. Also, due to the diversity of the materials used,
they were difficult and stressful to read and understand.
1.7 GLOSSARY
§ Application Programming Interface
(API): This connects a locally-based application to a
cloud-based storage system, so that a user can send data to it and access and
work with the data stored in it.
§ Architecture
definition languages (ADLs): They are computer languages
describing the software and hardware architecture of a system.
§ Client:
is a piece of computer hardware or software that accesses a service made
available by a server.
§ Common Object Request Broker Architecture
(CORBA): is a software-based interface from the Object
Management Group (OMG) that allows objects (software modules) to communicate
with each other no matter where they are located on a network.
§ Commercial off-the-shelf (COTS): are
components that satisfy the needs of the purchasing organization
without the need for custom-made or bespoke solutions.
§ Deployment: comprises all the activities that
make a software system available for use.
§ Groupware: is software that supports
multiple users working on related tasks in local and remote networks.
§ Graphical User Interface (GUI): is a
type of user interface that allows users to interact with electronic devices
through graphical icons and visual indicators instead of text-based user
interfaces, typed command labels or text navigation.
§ Hypertext Transfer protocol (HTTP): is
an application protocol for distributed, collaborative and hypermedia
information systems.
§ Interface: is a shared boundary across
which two or more separate components of a computer system share information.
§ International Business Machine (IBM):
is an
American multinational technology company in the United States that
manufactures and markets computer hardware, middleware and software.
§ Interoperability: is the ability of computer
systems or software to exchange and make use of information.
§ Interface Definition Language (IDL): is a
specification language used to describe a software component's interface in a
language-independent way, as used by CORBA.
§ Server: is a computer program or device
that provides functionality for clients.
§ Secure Sockets Layer (SSL): is the
standard security technology for establishing an encrypted link between a web
server and a browser.
1.8 ORGANISATION OF CHAPTERS
This research paper is structured as follows: Chapter 2 presents a literature
review of some major works on Component Based Software Engineering (CBSE).
Chapter 3 discusses CBSE using client/server computing as a case study, where
two technologies (CORBA and DCOM) are described. Finally, Chapter 4 concludes
the work.
CHAPTER 2
LITERATURE REVIEW
2.1 HISTORICAL BACKGROUND
The idea
that software should be componentized - built from prefabricated components
- first became prominent with Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, 1968, titled Mass Produced Software Components. The
conference set out to counter the so-called software crisis. McIlroy's subsequent inclusion of pipes and filters into the UNIX operating system was the first implementation of an infrastructure for this idea.
Brad Cox of Stepstone largely defined the modern concept of a software component. He
called them Software ICs and set out to create an infrastructure and
market for these components by inventing the Objective-C programming language. (He summarizes this view in his book Object-Oriented
Programming - An Evolutionary Approach 1986.)
Software components are used in two different contexts and in two kinds: (i)
components used as parts to build a single executable, or (ii) each executable
treated as a component in a distributed environment, where components
collaborate with each other using internet or intranet communication protocols
for IPC (Inter-Process Communication). Software ICs belong to the former kind,
while distributed component models such as CORBA and DCOM belong to the latter.
IBM led the way with their System Object Model (SOM) in the early 1990s. In
reaction, Microsoft paved the way for the actual deployment of component
software with OLE and COM. As of 2010, many successful software component
models exist.
2.2 RELATED WORKS
The
literature review assesses the past and current status of research work in the
area of component based software development. There is relatively little
information published on component based software development and its
quality improvement. The work done by earlier researchers in this area is
classified as follows:
i) Component
Based Software Development
ii) Software
Quality and Reliability
iii) Component
Reliability
iv) Genetic
Algorithm
v) Component
Selection
vi) Component
Assembly
vii) Software
Quality Function Deployment
The literature on the above topics is surveyed and explained in the following
sections.
2.2.1 COMPONENT BASED
SOFTWARE DEVELOPMENT
Component
based software development is a process that aims to design and construct
software systems using reusable software components. It focuses on reusing and
adapting existing components as opposed to just coding in a particular style.
CBSD encourages the composing of software systems as opposed to programming
them (Brown and Wallnau 1998).
Componentizing software had been suggested by McIlroy (1969) as a way of
tackling the software crisis, yet only in the last decade or so has the idea of
component-based software development taken off. Nowadays there is a growing
marketplace for Commercial Off-The-Shelf (COTS) components, embodying a "buy,
don't build" approach to software development (Brooks 1987). The promise of
CBSD is not only a reduction in development costs but also that component
systems are flexible and easy to maintain, due to the intended plug-and-play
nature of components.
CBSD
has many advantages. These include more effective management of complexity,
reduced time to market, increased productivity, improved quality, a greater
degree of consistency and a wider range of usability (Brown 2000). Heineman and
Councill (2001) suggested that the major goals of CBSD are the provision of
support for the development of systems as assemblies of components, the
development of components as reusable entities, and the maintenance and
upgrading of systems by customizing and replacing their components.
Experience has
shown that component-based development requires a systematic approach to and
focus on the component aspects of software development (Crnkovic and Larsson
2000).
2.2.2 COMPONENT
SELECTION
Various issues
and problems associated with the selection of suitable reusable components have
been addressed by the reuse community.
In many organizations the selection process is typically not defined in a
proper way; each project finds its own approach to it, often under schedule and
budget pressure. This leads to a deterioration of quality and reliability. The
selection of the right reusable component is therefore often a nontrivial task,
requiring careful consideration of multiple criteria and careful balancing
among application requirements, technical characteristics and financial issues.
COTS
systems need to be customized with respect to individual requirements and as a
consequence, the proper selection of COTS components is critical for subsequent
phases of the software development life cycle. Inefficient decisions not only
affect the correctness, quality and dependability of a COTS based application,
but they may also cause substantial cost increases with respect to the
development process and for future maintenance activities (Ruhe 2002, Ruhe
2003).
Kontio et al (1995) proposed the Off-The-Shelf Option (OTSO) method, which
takes into account the requirements specification, organizational
infrastructure, application architecture, project objectives and availability
of libraries. System development using off-the-shelf components is carried out
in six phases, namely: searching for COTS alternatives, screening, evaluation,
analysis, deployment and assessment. During the evaluation phase, this method
compares COTS alternatives according to their functional and non-functional
requirements. The Analytic Hierarchy Process is used to consolidate the
evaluation results for decision making.
Ncube (1999) described a method to identify software components called
Procurement-Oriented Requirements Engineering (PORE). This PORE method guides a
software development team in acquiring customer requirements and selecting COTS
components which satisfy these requirements. It supports an iterative process
of requirements acquisition and COTS software selection.
Alves
(2003) introduced COTS based Requirements Engineering (CRE) which is an
iterative COTS selection method that selects COTS by rejection. It considers
domain coverage, time restriction, cost rating and vendor guarantees in its
evaluation process. The key feature of CRE is the definition and analysis of
non-functional requirements.
The COTS-Aware Requirements Engineering (CARE) process uses softgoals for the
purpose of trading off requirements when using COTS. It uses an elimination
method, analyzing the level of trade-off required to use a particular COTS
product. The method assumes that COTS candidates already exist while the system
requirements are under development. In addition, the method requires an
enormous amount of effort and coordination between stakeholders during the
analysis stage (Chung and Cooper 2003).
The Comparative Evaluation Process (CEP) COTS selection method addresses both
the functional and quality aspects of COTS. Moreover, the method includes
organization policy and the credibility of sources of information in the
selection process. The CEP COTS evaluation model uses a weighted average as
well as a Multi-Criteria Decision Making technique to estimate the performance
of the products (Cavanaugh and Polen 2002).
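As a hedged illustration of the weighted-average scoring that CEP-style methods employ, the sketch below ranks two hypothetical COTS products against invented criteria and weights (none of these names or figures come from the cited study):

```python
# Hypothetical criteria weights (summing to 1) and per-product scores (0-10).
weights = {"functionality": 0.4, "quality": 0.3, "cost": 0.2, "vendor": 0.1}

products = {
    "COTS-A": {"functionality": 8, "quality": 6, "cost": 7, "vendor": 9},
    "COTS-B": {"functionality": 6, "quality": 9, "cost": 8, "vendor": 7},
}

# Weighted average: each criterion's score scaled by its importance.
def weighted_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(products, key=lambda p: weighted_score(products[p], weights),
                 reverse=True)
for p in ranking:
    print(p, round(weighted_score(products[p], weights), 2))
```

Here COTS-B's strength in the heavily weighted "quality" criterion lets it edge out COTS-A despite a lower functionality score, which is exactly the trade-off a weighted average makes explicit.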
A
commonly used alternative is the analytic hierarchy process which includes a
method for determining weights and component scores against attributes (Maiden
and Ncube 1998, Kunda 2003). These scores are based on pair-wise comparisons
and thus use the same units even when combining qualitative and quantitative
data.
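A minimal sketch of how AHP-style pairwise comparisons can be turned into criterion weights, using the row geometric-mean approximation to the principal eigenvector; the matrix values are illustrative, not taken from the cited works:

```python
import math

# A[i][j] = how much more important criterion i is than criterion j
# (Saaty-style scale); the matrix is reciprocal: A[j][i] = 1 / A[i][j].
A = [
    [1,   3,   5],    # functionality
    [1/3, 1,   2],    # reliability
    [1/5, 1/2, 1],    # cost
]

def ahp_weights(matrix):
    # Row geometric means approximate the principal eigenvector;
    # normalizing them yields weights that sum to 1.
    gmeans = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

w = ahp_weights(A)
print([round(x, 3) for x in w])
```

Because every weight is derived from the same pairwise comparisons, qualitative judgments ("functionality matters three times as much as reliability") and quantitative data end up on the same normalized scale, as the paragraph above notes.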
Baker
et al (2006) present two algorithms for component selection and prioritization
for the next release problem where components will determine special,
additional features or functions of the next release in the evolution of the
software.
The above
literature review shows the importance of software component selection and
evaluation in Component Based Software Development process in the development
stage.
2.2.3 COMPONENT
ASSEMBLY
The integration of COTS software into highly complex software systems
introduces many important issues. The primary issue is the conflict between
cost and complexity in software integration; consumers of these systems want a
reliable software system within a minimum budget and schedule, which is also an
important issue in component based software.
In the literature, several methods have been proposed for dealing with COTS
component assembly. In all of them, a key point is the ability to assemble the
components with minimum cost and schedule.
Many authors have discussed the difference between software component assembly
and hardware component assembly. Component based software development can help
the software industry to realize quality and productivity improvements similar
to those achieved in electronic and mechanical component assemblies (Brown
2000, Szyperski 2003).
Voas
(1998) described that the component based software development approach can
potentially be used to reduce software development costs, increase flexibility,
assemble systems rapidly and reduce the spiral maintenance burden associated
with the support and upgrade of large systems.
Inverardi and Tivoli (2003) have proposed an approach to the integration
problem in the CBSD setting: compose the system by assuming a well-defined
architectural style, in such a way that it is possible to detect integration
anomalies.
Nunn
and Dengo (2002) have described a component architecture which is defined by an
XML schema that specifies the composition of components into a persistent,
suitable and composite application.
According to the past literature, most assembly procedures use the software
architecture for integration, but there is no step-by-step procedure specifying
the order in which components should be assembled to achieve a correct assembly
in a COTS system. A proper methodology is needed to produce high-quality
software for the present software industry.
2.2.4 SOFTWARE
QUALITY AND RELIABILITY
Generally, software quality is conformance to requirements, because if the
software contains too many functional defects, the basic requirement of
providing the desired function is not met. To increase software quality and
prevent software errors, the focus must be on comprehensive requirements and
software testing. Software testing is the basic requirement for software
reliability engineering (Whittaker 2000).
Software reliability engineering is a keystone of total quality management. It
provides a user-oriented metric that is correlated with customer satisfaction;
the reliability of software-based systems cannot be determined without an
associated software reliability measure (Musa 2005).
Software reliability engineering is based on a
solid body of theory (Musa et al 1987) that includes operational profiles,
random process software reliability models, statistical estimation, and
sequential sampling theory. Software personnel have practiced software
reliability engineering extensively over a period dating back to 1973 (Musa and
Iannino 1991). But still it is under research because of new technologies
emerging in software based systems.
Musa et al (1987) presented an exponential-distribution reliability estimation
model for assessing the reliability of individual components based on their
field failure intensity.
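Under the exponential model, a constant failure intensity λ (failures per unit execution time) gives reliability R(t) = e^(-λt) over mission time t. A small illustrative computation, with an invented failure intensity:

```python
import math

# Exponential reliability model sketch: R(t) = exp(-lambda * t),
# where lambda is the (assumed constant) failure intensity.
def reliability(failure_intensity, t):
    return math.exp(-failure_intensity * t)

lam = 0.002  # illustrative field failure intensity: 0.002 failures per hour
for t in (10, 100, 500):
    print(t, round(reliability(lam, t), 4))
```

As expected from the model, reliability decays monotonically with mission time, and 1/λ (here 500 hours) is the mean time to failure, at which point R has fallen to e^(-1) ≈ 0.368.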
Osterweil
et al (1996) concluded in their paper as “The quality of software products is
clearly unacceptably poor. A large and challenging program of long-term
research is needed to support developing the technologies needed”.
Actually,
the literature is jammed to overflowing software reliability models, so the
practitioner should be aware of selecting models for a particular application,
which may be quite suited to the situation. The two most important criteria for
selecting a software reliability model and estimation method for use in
tracking reliability growth are model simplicity and maturity. The model which
is used to predict the reliability should be in simple understandable format by
the software engineers.
2.2.5 COMPONENT
RELIABILITY
CBSD is touted as the approach to improve application quality, decrease time to
market, and improve maintainability and reliability. As a result, many
techniques have emerged to estimate and analyze the reliability of components.
A few models are also available for assessing the reliability of individual
components.
Dolbec and Shepard (1995) provided a model which estimates the reliability of a
CBS system based upon component usage ratios. Component severity analysis along
with this model helps to focus testing effort in areas where the best potential
reliability improvements can be achieved. This leads to savings of time and
money.
Krishnamurthy and Mathur (1997) assessed the reliability of component based
software systems. Their approach is based on test information and test cases.
For each test case, the execution path is identified and the path reliability
is calculated using the reliability of the components, assuming a series
connection. The reliability of the application is the average of the
reliability estimates of the test paths. This approach does not consider
component interface faults.
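The series-connection calculation can be sketched as follows; the component names, reliability figures and paths are illustrative, not taken from the cited study:

```python
import math

# Assumed per-component reliabilities (probability of failure-free operation).
component_reliability = {"ui": 0.99, "logic": 0.97, "db": 0.98}

# Each test case exercises one execution path through the components.
test_paths = [
    ["ui", "logic"],
    ["ui", "logic", "db"],
    ["ui", "db"],
]

# Series connection: a path succeeds only if every component on it succeeds,
# so the path reliability is the product of the component reliabilities.
def path_reliability(path):
    return math.prod(component_reliability[c] for c in path)

# Application reliability: average of the per-path estimates.
estimate = sum(path_reliability(p) for p in test_paths) / len(test_paths)
print(round(estimate, 4))
```

Note that, as the text observes, this sketch inherits the approach's limitation: failures at the interfaces between components are not modelled at all.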
Hamlet et al (2001) presented a theory which describes how a component
developer can design and test their components to produce measurements that are
later used by system designers to assess the composite system reliability.
May
(2002) derived a new model that describes how test based software reliability
estimation depends on the component structure of the code. The models show how
component test results can be re-used and how this re-use can provide failure
probability estimation for software systems built from COTS components.
Yacoub et al (2004) proposed a new technique called scenario based reliability
estimation, which builds on scenarios used in the analysis phase of component
based system development. In this approach, component dependencies are derived
to establish a probabilistic model on which the reliability estimation
technique is developed. This algorithm is used to identify critical components,
their interfaces and links in order to estimate the system reliability.
Shukla
et al (2005) proposed a theoretical framework for assessing the reliability of
component based software system which incorporates test case execution and
output evaluation. The framework requires the user of the component to define
an operational profile for the component and appropriate reliability
parameters.
Comparing
the factors of simplicity, generality and applicability, the software
reliability model proposed by Dolbec and Shepard (1995) is considered as a
favorable model which focuses on the heavily used component in a component
based software system to improve reliability.
2.2.6 SOFTWARE
QUALITY FUNCTION DEPLOYMENT
Software
process improvement has received attention from more and more software
development organizations nowadays. It is a common practice for an
organization, especially a large organization, to select a software engineering
standard and model for its process improvement.
Quality
Function Deployment is a technique used in product and process design to
identify the priorities for customer needs and expectations and to transfer
these priorities into the technical design of the product. The benefits of such
an approach include such deliverables as increased coverage of explicit and
implicit requirements; improved acceptance of product; establishment of
priority product characteristics to support design, development, and testing;
and increased efficiency of product development.
Lamia
(1995) from Carnegie Mellon University gave an overview of integrating QFD with
object oriented software design methodologies. This article puts forth the
proposition that QFD methods for collecting information about the users of a
system, the functions performed by the system, and the quality demands the
users expect of the system can be very effective in the initial stages of
object oriented analysis. A number of QFD-based tabular forms are proposed in
this article which parallel the graphical diagrams used in Object Oriented
Analysis (OOA). He concludes with some suggestions for opportunities to
integrate QFD and OO automated support tools.
Richardson (1997) presented the importance of improving the software process to
small software development companies, and also presented the difficulties faced
by small companies when implementing such improvements. The author presented a
model in his paper, based on Quality Function Deployment, which can be used as
a tool to aid the implementation of a software process improvement action plan,
and discussed the use of this model in a small software development company in
Ireland.
Liu et al (2005) applied QFD in software process management and improvement
based on CMM. They have suggested a framework for software process improvement
using QFD, in which the four phases considered are requirements integration,
CMM goal prioritization, key practice prioritization and action plan
prioritization. The advantage of this framework is that requirements from
various perspectives are incorporated into the final set of actions.
The basic QFD methodology can also be combined with common software quality
considerations to create a hybrid software requirements elicitation model.
According to the survey of the literature, the House of Quality can be applied
to software design, and the resulting software requirements are diverse in
their scope and coverage. The result is that product acceptance extends beyond
basic functionality to serve as an indicator of reliability, usability and
other customer preferences along with design considerations.
CHAPTER 3
CASE STUDY: FRAMEWORKS OF COMPONENT BASED CLIENT-SERVER COMPUTING
3.1 INTRODUCTION
This chapter proposes two frameworks for client/server computing using
distributed objects and also discusses the topics affecting client/server
frameworks. The component technologies discussed are CORBA, the Object
Management Group's proposal for a distributed object framework, and DCOM,
Microsoft's system for creating and using objects on remote machines while
maintaining the common paradigm for interaction among libraries, applications
and system software provided by COM.
3.2 WHAT IS CLIENT/SERVER COMPUTING
Client/server computing systems comprise two logical parts: a server that
provides services and a client that requests services of the server. Together,
the two form a complete computing system with a distinct division of
responsibility.
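A minimal client/server sketch using Python's standard library illustrates this division of responsibility; the upper-casing "service" and the loopback-only setup are purely illustrative:

```python
import socket
import threading

# The server passively waits for a request and provides a service
# (here, upper-casing the received text).
def server(sock):
    conn, _ = sock.accept()          # wait for a client to connect
    with conn:
        data = conn.recv(1024)       # receive the client's request
        conn.sendall(data.upper())   # provide the service and reply

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

# The client knows only the server's interface: an address and a protocol.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello server")
    response = client.recv(1024)
print(response)                      # b'HELLO SERVER'
```

The client never sees how the service is implemented; it only sends a request over an agreed channel and consumes the reply, which is the essence of the division of responsibility described above.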
A good example of a client/server system is a simple automated teller machine
(ATM) network. Users use ATMs as clients to interface with a small sub-server,
which collaborates with a larger server that manages all of the smaller
servers. Here, the sub-servers are servers to the ATMs and clients to the
master server.
ATMs provide the user interface and can be customized as required (e.g. for
multilingual support), while the intermediate servers provide the application
logic, such as checking account balances and transferring money between
accounts. The centralized server
provides additional application logic, such as ensuring that concurrent
transactions are handled correctly.
Figure 3.2.1: A traditional client/server system. Clients request services of
the server independently but use the same interface.
3.2.1 CLIENTS
Many
clients have a modern graphical user interface (GUI) that presents each
resource accessible by the user as an independent object; the GUI is usually
provided with the help of the OS so that consistency across multiple
applications is maintained. Clients can also offer services to other clients.
3.2.2 SERVERS
Traditionally, servers are entities that passively wait for requests from
clients and act on them, but recent research in this area also incorporates
systems fulfilling the theoretical organization of client/server systems in
which servers can actively watch for changes in the state of the clients and
act accordingly. The server also encapsulates the provided service in order to
protect its state. Servers can also divide a task into subtasks and delegate
each subtask to other servers.
3.2.3 MIDDLEWARE
Middleware
is the distributed software required to facilitate client/server interaction. Transparent
access to non-local services and resources distributed across a network is usually
provided through middleware, which serves as a framework for communication
between the client and server portions of a system. Middleware can
be thought of as the networking between the components of a
client/server system; it is what allows the various components to
communicate in a structured manner. Middleware is defined to
include the Application Programming Interfaces (APIs) used by clients to
request a service from a server, the physical transmission of the
request over the network (or the communication of the service
request to a local server), and the resulting transmission of data for
the client back over the network.
Examples of the proliferation
of competing domain-specific standards include:
·
Database Middleware such as
ODBC, SQL, Oracle Glue
·
Groupware Middleware such as
Microsoft Exchange, Lotus Notes
·
Internet Middleware such as
HTTP, SSL
·
Object-Oriented Middleware
such as CORBA, DCOM
3.2.4 FAT SERVERS VS FAT CLIENTS
Information
systems specialists dub a part of a system with a disproportionate amount of
functionality “fat”; a “thin” portion of a system is a part with less responsibility
delegated to it [Orfali et al. 1996a]. The server portion of a client/server
system almost always holds the data, and the client is nearly always responsible
for the user interface; the shifting of application logic constitutes the distinction
between fat clients and fat servers.
Fat
server systems, such as groupware systems and Web servers, delegate more responsibility
for the application logic to the server, whereas fat client systems, such as
most database systems, place more responsibility on the client.
The use of fat servers has also increased because of their recently exploited efficiencies. Generally, fat servers are easier to manage, maintain, and deploy, since the data and code exist in a centralized location.
3.2.5 N-TIER SYSTEMS
The canonical client/server model assumes exactly two discrete participants in the system, called a "two-tier system"; the application logic must reside in the client or the server, or be shared between the two. It is also possible to have the application logic reside separately from the user interface and the data, turning the system into an "N-tier system."
An N-tier system provides more flexibility than a traditional two-tier system because the application logic is separated out, which makes the processes more robust: they can operate independently of the clients and servers. Decoupling the application logic from the data also allows data from multiple sources to be used in a single transaction.
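The separation into tiers can be sketched as three small objects: presentation, application logic, and data, each independently replaceable. The class names and the totalling example are invented for illustration.

```python
# A toy three-tier layout: presentation, application logic, and data live in
# separate objects, so each tier can be replaced or scaled independently.
# All names are invented for illustration.

class DataTier:
    def __init__(self, rows):
        self._rows = rows

    def fetch(self, key):
        return self._rows.get(key, 0)

class LogicTier:
    """Application logic sits apart from both UI and data, and can combine
    several data sources in one operation."""

    def __init__(self, *sources):
        self._sources = sources

    def total(self, key):
        return sum(source.fetch(key) for source in self._sources)

class PresentationTier:
    def __init__(self, logic):
        self._logic = logic

    def render(self, key):
        return f"{key}: {self._logic.total(key)}"

ui = PresentationTier(LogicTier(DataTier({"alice": 120}), DataTier({"alice": 30})))
print(ui.render("alice"))  # alice: 150
```

Note how the logic tier draws on two data sources in one operation, mirroring the multi-source transaction benefit above.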
3.2.6 FUNCTIONS AND BENEFITS OF CLIENT/SERVER SYSTEMS
Functions
· Regulate access to shared resources when multiple clients attempt to access the same resource through the server.
· Provide a standardized system through which network services can be utilized, giving location-transparent access to all services.
Benefits
· Applications map naturally into a client/server framework; an electronic phonebook system is one example.
· Resource-intensive applications can be designed to run on multiple low-cost systems.
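The first function listed, regulating shared-resource access, can be illustrated with a lock inside the server so that concurrent client requests are serialized. The server class and workload below are invented for this sketch.

```python
import threading

# A sketch of access regulation: the server serializes access to a shared
# counter so concurrent clients cannot corrupt it. Names are invented.

class SharedResourceServer:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # only one client may touch the resource at a time
            current = self._value
            self._value = current + 1

    @property
    def value(self):
        return self._value

server = SharedResourceServer()

def client():
    for _ in range(1000):
        server.increment()

threads = [threading.Thread(target=client) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(server.value)  # 4000: no updates were lost
```

Without the lock, the read-then-write in `increment` could interleave between clients and lose updates; the server's mediation prevents that.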
3.3 DISTRIBUTED OBJECTS AND COMPONENTS
3.3.1 FROM CLASSICAL OBJECT TO
DISTRIBUTED OBJECT
· Classical objects are entities that encapsulate data and a set of operations (methods) that act on that data.
· Classical objects do not exist as separate entities once the program is compiled.
· Distributed objects can reside anywhere in the network and continue to exist as physical standalone entities while remaining accessible to other objects.
3.3.2 BENEFITS OF DISTRIBUTED OBJECTS
· Self-managing distributed objects take responsibility for their own resources, work across networks, and interact with other objects.
· They allow applications to be split into lightweight pieces that can be executed on separate machines.
· Distributed objects can generate events to notify other objects that an action should take place; this synchronization is very useful.
· With middleware, interoperation among objects can be ensured.
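The event-generation benefit can be sketched with a tiny subscribe/notify mechanism. This imitates, in-process, what distributed-object systems do across a network; the names are invented.

```python
# A minimal event-notification sketch: one object generates events and other
# objects react. This imitates, in-process, what distributed-object systems
# do across a network; the names are invented.

class EventSource:
    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        self._listeners.append(callback)

    def fire(self, event):
        for callback in self._listeners:  # notify every interested object
            callback(event)

log = []
source = EventSource()
source.subscribe(lambda e: log.append(f"observer saw {e}"))
source.fire("stock-level-low")
print(log)  # ['observer saw stock-level-low']
```

The source object never needs to know which objects are listening, which is what makes the notification style useful for loosely coupled components.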
3.3.3 COMPONENTS
Components are the smallest self-managing, independent, and useful parts of a system that work in multiple environments. Such parts need not even be objects; ActiveX controls provide one such example.
Components are most often distributed objects incorporating advanced self-management features. Such components rely on robust distributed-object models so as to maintain transparency of location and implementation.
3.4 NEW MODELS FOR CLIENT/SERVER COMPUTING
Here, three models will be discussed: Client/Server using distributed objects (CORBA and DCOM), Client/Server using Java, and Client/Server using Java and CORBA.
3.4.1
CLIENT/SERVER USING DISTRIBUTED OBJECTS
Due
to market forces, only CORBA and DCOM provide viable long-term solutions to the
challenge of a standardized framework for object-based client/server systems.
3.4.1.1 CLIENT/SERVER WITH CORBA
CORBA (Common Object Request Broker Architecture) is the most ambitious middleware undertaking ever: it manages every detail of component interoperability, making possible interaction-based systems that incorporate components from multiple sources.
The most important part of a CORBA system is the Object Request Broker (ORB), which defines the object model and provides bidirectional, location-transparent object access. The ORB shields clients from the complexities of remote object communication; it handles all of the difficulties of coordinating the task.
CORBA
uses the ORB to establish a client/server relationship between components. The
ORB intercepts method invocations from client objects and routes them to an
appropriate server.
At
the heart of CORBA’s strong interoperability is the language-neutral Interface
Definition Language (IDL). The IDL, used to specify the services that an object
can provide, was designed to be fully independent of the implementation
language, tools, operating system, and other factors that normally affect
interoperability. Important issues such as error handling are accounted for by
this completely declarative language. This flexibility is achieved at the expense of equipping each client component with an IDL stub for each server used.
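The stub-to-ORB call path can be made concrete with a toy, in-process model. This is not real CORBA: a real ORB marshals the call over a network, and the class and interface names here are invented for illustration.

```python
# Not real CORBA: a toy, in-process model of the stub -> ORB -> servant call
# path. A real ORB would marshal the call over a network; the class and
# interface names here are invented for illustration.

class Orb:
    """Intercepts method invocations and routes them to the right server."""

    def __init__(self):
        self._servants = {}

    def bind(self, interface, servant):
        self._servants[interface] = servant

    def invoke(self, interface, method, *args):
        servant = self._servants[interface]     # location transparency:
        return getattr(servant, method)(*args)  # the client never sees this

class CalculatorStub:
    """Roughly the role an IDL compiler's output plays on the client side."""

    def __init__(self, orb):
        self._orb = orb

    def add(self, a, b):
        return self._orb.invoke("Calculator", "add", a, b)

class CalculatorServant:
    def add(self, a, b):
        return a + b

orb = Orb()
orb.bind("Calculator", CalculatorServant())
stub = CalculatorStub(orb)
print(stub.add(2, 3))  # 5 -- looks like a local call to the client
```

To the client, `stub.add` is indistinguishable from a local method; all routing is hidden inside the ORB, which is the point of the stub mechanism.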
CORBA supports Dynamic Method
Invocation, handled through Dynamic Invocation Interface (DII). DII allows a
component to learn about the methods of other components at run time.
To accommodate components without
IDL-based stubs, CORBA provides a Dynamic Skeleton Interface (DSI) that binds
incoming method calls for such objects at runtime. Server demands are met via
an Object Adapter, which provides the core run-time functionality required by
servers.
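As an analogy only (not CORBA code), Python's introspection can imitate what DII offers: a client learning and invoking an object's methods at run time with no compile-time stub. The servant class is invented for illustration.

```python
# An analogy, not CORBA code: Python introspection imitates what DII offers,
# letting a client learn and invoke an object's methods at run time with no
# compile-time stub. The servant class is invented for illustration.

class InventoryServant:
    def list_items(self):
        return ["bolt", "nut"]

    def count(self):
        return 2

def discover_methods(obj):
    """Learn an object's callable interface at run time."""
    return sorted(
        name for name in dir(obj)
        if not name.startswith("_") and callable(getattr(obj, name))
    )

def dynamic_invoke(obj, method, *args):
    """Invoke a method by name, resolved only at run time."""
    return getattr(obj, method)(*args)

servant = InventoryServant()
print(discover_methods(servant))         # ['count', 'list_items']
print(dynamic_invoke(servant, "count"))  # 2
```

Discovery plus by-name invocation is the essence of what DII gives a CORBA client, and late binding of incoming calls is the mirror-image service DSI gives a server.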
CORBA specifies two means by which an object can locate another object in a system. The first is the Naming Service, which is analogous to the white pages of a phone book: objects are found by name. The second is the Trader Service, which is like the yellow pages: objects are found by the kind of service they offer.
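The contrast between the two lookup styles can be sketched with two plain registries. These dictionaries stand in for the CORBA services only conceptually; all names are invented.

```python
# A toy contrast of the two lookup styles. The registries below are plain
# dictionaries standing in for the CORBA services; all names are invented.

class NamingService:
    """White pages: resolve an object by its exact name."""

    def __init__(self):
        self._bindings = {}

    def bind(self, name, obj):
        self._bindings[name] = obj

    def resolve(self, name):
        return self._bindings[name]

class TraderService:
    """Yellow pages: find objects by the kind of service they offer."""

    def __init__(self):
        self._offers = {}

    def export(self, service_type, obj):
        self._offers.setdefault(service_type, []).append(obj)

    def query(self, service_type):
        return self._offers.get(service_type, [])

naming = NamingService()
naming.bind("payroll/printer-1", "printer-proxy")
print(naming.resolve("payroll/printer-1"))  # printer-proxy

trader = TraderService()
trader.export("printing", "printer-proxy")
print(trader.query("printing"))  # ['printer-proxy']
```

The naming lookup requires the client to know the exact name; the trader lookup lets the client ask "who can print?" without knowing any names in advance.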
3.4.1.2
CLIENT/SERVER WITH DCOM
Microsoft is touting the Distributed Component Object Model (DCOM), which first shipped with Windows NT 4.0, as the future model for Internet computing; DCOM manifests itself primarily through the use of ActiveX components, which are DCOM objects. Furthermore, software releases from Microsoft (such as Visual J++) reveal that Microsoft is intent on providing a workable platform on which to implement Java objects. The bindings provided with Visual J++ are strong enough that ActiveX components written in other languages can be made to look like remote Java objects.
DCOM is like CORBA in that it cleanly separates interface from functionality using an IDL. Its IDL is based on that of the Distributed Computing Environment (DCE), yet it is neither CORBA- nor DCE-compliant; this severely limits the potential for interoperability.
When a reference to a DCOM object is requested, the handle is
arbitrary; a subsequent request for the same object may yield a different
handle connecting the client with an equivalent interface. DCOM supports a
registry of available interfaces for objects to reference, and even provides
information on the meanings and types of parameters that should accompany a
service request.
For a DCOM client to access the methods of an object, it must use
a virtual lookup table to obtain a pointer to that function. Since DCOM objects
have no unique object identification, there are no naming or trading services.
3.4.1.3 CORBA AS
THE DOMINANT MODEL
With DCOM running
only on Microsoft operating systems, there is little question that CORBA is the
more portable system. CORBA is now in its third generation, and the standards
body governing its evolution serves as assurance that it will remain an open
system. In contrast, DCOM is currently a Windows-only technology and it is
apparently governed by its maintainer, Microsoft. Several barriers to porting
DCOM to other platforms exist. CORBA uses a universal security mechanism that is independent of platform and OS-level security, while DCOM uses the Windows NT security model to provide system security.
CORBA provides seamless integration for the most popular object-oriented programming languages, while DCOM language support is limited to Java, C, C++, and Visual Basic. CORBA hides the IDL from the programmer, whereas an understanding of the DCOM IDL is essential to DCOM programming.
CORBA is the superior technology, and the rest of the industry, including giants such as Hewlett-Packard, IBM, Novell, and Apple, along with quickly moving newcomers such as Netscape, Oracle, and JavaSoft, is rallying behind CORBA to make it the distributed object standard.
3.4.2 USING JAVA FOR CLIENT/SERVER
APPLICATIONS
Java supports a mobile code system, provided through bytecodes, which solves portability and security problems. The Java Virtual Machine translates the bytecodes into actual machine instructions on the fly. In Java, the hosting application is quite often a Web browser or some other stand-alone application, although operating systems are probably a better choice for this task. For code to be considered mobile, it must be portable, meaning the code can run on a variety of platforms.
Java
provides some core frameworks relevant to Client/Server computing. These
frameworks are:
· Java Applet Framework - Provides the "basics" needed for Java applets.
· Java Commerce Framework - Provides secure monetary transactions.
· Java Enterprise Framework - Provides object- and database-access services for distributed systems.
· Java Media Framework - Supports 2D and 3D graphics and animation, MIDI, synchronization services, and audio.
· Java Security Framework - Provides support for authentication, digital signatures, and encryption.
· Java Beans Framework - Allows flexible event handling across multiple components, discovery of methods supported by other objects, persistent objects, and user-interface allocation among multiple objects.
3.4.3 CLIENT/SERVER
USING JAVA AND CORBA
Integration of Java and CORBA can provide a platform for universal
network computing. CORBA can replace the current Web-based system
of providing client/server services, which uses HTTP/CGI. Doing this would bring
these significant benefits to Java.
i. Superior performance and flexibility: CORBA allows clients to invoke methods on a server directly, which is more flexible than the system supported by HTTP.
ii.
Scalability:
The CORBA ORB can dispatch
incoming client requests to a server of its choosing, allowing load balancing
of client requests.
iii.
Component infrastructure: Using CORBA would allow
communication not only among Java applications, but also among Java
applications and components written in other languages.
iv. Split components: using CORBA with Java would allow Java components to be split into client-side and server-side components, making the Web client/server model even more attractive.
Some ways in which Java improves on CORBA include:
i. Simplified code distribution
ii. Mobile code
iii. Agenting
iv. Superior language features
v. Multi-threading
Java and CORBA
complement each other quite well, with CORBA providing a distributed object
infrastructure and Java providing a strong mobile code system. Together, they let
any two objects on the same network communicate under all circumstances.
3.5 FRAMEWORKS
3.5.1 WHAT ARE
FRAMEWORKS?
Frameworks are tools that help programmers construct software systems structured in the same way as the analogous real-world system. Frameworks typically provide a way to manage a system of interacting objects and to develop objects that will integrate seamlessly into the framework.
A framework is ideally suited to capturing the elements common to a family of related systems. It provides a high degree of design reuse for interactive systems composed of collaborating objects and ensures that a "product" of the framework will work within it. Thus, frameworks are a valuable tool for ensuring the availability of object services.
3.5.2 BUSINESS
OBJECTS AS A CLIENT/SERVER FRAMEWORK
Business
objects are self-managing components used to represent key objects or processes
in a real-life system. Business objects are “shippable” products that usually
have a user interface and the ability to cooperate with other objects to meet a
certain user need.
The
most significant advantage of using business objects is the capability to model
accurately the corresponding real-life business processes. Like other
components, business objects should support late binding so they can be
interchanged easily and interact immediately with existing components. Business
objects are relatively easy to develop, since they can be based on CORBA
objects, which already provide a means for inter-component collaboration and
resource management.
Business
objects are reusable. Business objects can be specialized to meet the unique
demands of a business. The Business Object Model Special Interest Group
(BOMSIG) has proposed a standard for business objects. The standard calls for
each business object to be composed of three types of cooperating objects.
i. Business Logic Object (BLO): defines how the object reacts to certain events.
ii. Business Process Object (BPO): helps maintain the business logic for the entire system.
iii. Presentation Objects: provide the user with a representation of the component.
A normal business object is likely to have multiple Presentation Objects, but usually has one BLO and one BPO. Because these three objects are managed by one object, collaborating components see only a single object that provides the aggregate services of its constituents. This three-object construction can be viewed as a three-tier client/server system.
Tier 1: Visual
aspects of a system, usually handled by a client system.
Tier 2: Data for
the object and the application logic required to meaningfully act on it.
Tier 3: Data and
application logic required to integrate the business object with other business
objects and existing systems, such as legacy servers or databases.
CHAPTER 4
CONCLUSION
4.1
SUMMARY
Component-Based Software Development is the latest advance in software development, promising the possibility of extending the real-world approach of creating well-specified parts and of incorporating legacy code "wrapped" as components. Affordability, reusability, simplicity, reliability, adaptability, and reduced workload are the major reasons for its success.
An attempt has been made to give a clear overview of component-based software development by discussing the major activities involved in CBSE. The two main technologies used for client/server computing, CORBA and DCOM, are discussed along with their advantages and limitations, and are covered in depth because they are the most popular in the software industry. A comparison of these technologies based on functionality and mechanism is conducted, which will give the reader an in-depth understanding and guidance in selecting these technologies for CBSD.
4.2
RECOMMENDATION
Based on the detailed literature review carried out for component-based software engineering, it can be inferred that although some papers have touched on the selection, assembly, and quality improvement of software components, no single work integrates the following areas:
i) Study of the behavior of software components in component-based software systems.
ii) Identification of a proper methodology for the selection of components.
iii) Improvement of the component assembly process by considering the interaction between components.
iv) Analysis of the reliability of component-based software systems.
v) Quality improvement based on user requirement specifications.
Hence, based on the literature review, it is suggested that future research focus on all of these aspects of component-based software systems.