EVOLUTION OF OO METHODOLOGY



The earliest computers were programmed in machine language using 0s and 1s, and mechanical switches were used to load programs. Then, to make the programmer's job easier, assembly language was introduced, where programmers use mnemonics for various instructions to write programs. But it was a tedious job to remember so many mnemonic codes for various instructions. Another major problem with assembly language is that it is machine-architecture dependent.
To overcome the difficulties of assembly language, high-level languages came into existence. Programmers could write a series of English-like instructions that a compiler or interpreter could translate into the binary language of computers.
These languages were simple in design and easy to use because programs at that time performed relatively simple tasks, such as arithmetic calculations. As a result, programs were quite short, limited to a few hundred lines of source code. As the capacity and capability of computers increased, so did the scope to develop more complex computer programs. However, these languages suffered from limitations: poor reusability, primitive flow control (only goto statements), problems caused by global variables, and difficulty in understanding and maintaining long programs.

Structured Programming
When a program becomes large, a single list of instructions becomes unwieldy. It is difficult for a programmer to comprehend a large program unless it is broken down into smaller units. For this reason, languages introduced the concept of functions (also called subroutines, procedures or subprograms) to make programs more comprehensible.
A program is divided into functions, where each function has a clearly defined purpose and a defined interface to the other functions in the program. Further, a number of functions can be grouped together into a larger entity called a module, but the principle remains the same, i.e., a grouping of components that carry out specific tasks. Dividing a program into functions and modules is one of the major characteristics of structured programming.
By dividing the whole program into functions, a structured program minimizes the chance that one function will affect another. Structured programming helps the programmer to write error-free code and maintain control over each function. This makes the development and maintenance of the code faster and more efficient.
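As a minimal sketch (the function names and the student-marks example are invented for illustration, not taken from any particular system), structured programming divides a task into small functions, each with a clearly defined purpose and interface:

```python
def read_marks(raw):
    """Input function: parse a comma-separated string into numbers."""
    return [float(x) for x in raw.split(",")]

def average(marks):
    """Processing function: compute the mean of the marks."""
    return sum(marks) / len(marks)

def format_report(avg):
    """Output function: present the result in a readable form."""
    return f"Average mark: {avg:.1f}"

def main(raw):
    # The main routine only wires the functions together; each function
    # can be understood, tested and maintained on its own.
    return format_report(average(read_marks(raw)))

print(main("70, 80, 90"))  # Average mark: 80.0
```

Because each function touches only its own parameters and local variables, changing one function (say, the report format) does not disturb the others.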
Structured programming remained the leading approach for almost two decades. With the emergence of new applications of computers, demand arose for software with many new features, such as graphical user interfaces (GUIs). The complexity of such programs increased many-fold, and this approach started showing new problems.
The problems arose from the fundamental principle of this paradigm: the whole emphasis is on doing things. Functions do some activity, maybe a complex one, but the emphasis is still on doing. Data are given a lower status. For example, in a banking application, more emphasis is given to the function that collects the data in a desired format, the function that processes it through summation, manipulation and so on, or the function that displays it in the desired format or creates a report. But you will also agree that the important part is the data itself.
The major drawback of structured programming lies in its primary components, i.e., functions and data structures, which unfortunately do not model the real world very well. To model a real-world situation, data should be given more importance. Therefore, a new approach emerged with which we can express solutions in terms of real-world entities and give due importance to data.
Object Oriented programming
The world and its applications are not organized as functions and values separate from one another. Problem solvers do not think about the world in this manner; they deal with their problems by concentrating on objects, their characteristics and their behavior. The world is object oriented, and Object Oriented programming expresses programs in ways that model how people perceive the world. Think of the different real-world objects around us that we use for performing different functions: problem solving using the object-oriented approach is very close to our real-life problem-solving techniques.
The basic difference in Object Oriented programming (OOP) is that the program is organized around the data being operated upon rather than the operations performed. The basic idea behind OOP is to combine data and the functions that operate on that data into a single unit called an object. In our next section, we will learn about the basic concepts used extensively in the Object Oriented approach.
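Continuing the banking example from above, a minimal sketch (the class and attribute names are illustrative, not from any real system) shows how an object bundles data together with the functions that operate on it:

```python
class BankAccount:
    """An object: the account's data plus the operations on that data."""

    def __init__(self, owner, balance=0):
        self.owner = owner        # data held inside the object
        self.balance = balance

    def deposit(self, amount):
        """Operation that manipulates the object's own data."""
        self.balance += amount

    def withdraw(self, amount):
        """Operation that also guards the data it owns."""
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = BankAccount("Asha", 100)
account.deposit(50)
account.withdraw(30)
print(account.balance)  # 120
```

Here the data (owner, balance) and the functions that act on it (deposit, withdraw) live in one unit, so no unrelated function can reach in and corrupt the balance.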


EVOLUTION OF SOFTWARE ENGINEERING



Any application on a computer runs through software. As computer technologies have changed tremendously in the last five decades, software development has undergone significant changes in the last few decades of the 20th century. In the early years, software used to be small and was developed either by a single programmer or by a small programming team. Program development depended on the programmer's skills, and no strategic software practices were present. In the early 1980s, the size of software and its application domain increased, and consequently so did its complexity. Bigger teams were engaged in the development of software. Software development became a bit more organised, and software development management practices came into existence.
In this period, higher-level programming languages like Pascal and COBOL came into existence, and their use made programming much easier. In this decade, some structured design practices like the top-down approach were introduced. The concept of quality assurance was also introduced. However, the business aspects of software, like cost estimation and time estimation, were in their elementary stages.
In the late 1980s and 1990s, software development underwent revolutionary changes. Instead of a programming team within an organisation, full-fledged software companies (called software houses) evolved. A software house's primary business is to produce software. A software house may offer a range of services, including hiring out suitably qualified personnel to work within a client's team, consultancy, and a complete system design and development service. The output of these companies was software. Thus, they viewed software as a product and its functionality as a process. The concept of software engineering was introduced, and software became more strategic, disciplined and commercial. As the developer and the user of software became separate organisations, business concepts like software costing, software quality, laying down well-defined requirements and software reliability came into existence. In this phase, entirely new computing environments based on knowledge-based systems were created. Moreover, a powerful new concept, object-oriented programming, was also introduced.
The production of software became much more commercial. Software development tools were devised, and the concept of Computer Aided Software Engineering (CASE) tools came into existence. Software development became faster with the help of CASE tools.

Characteristics of a Web Application




In this section, we will look at the overall architecture and deployment context of web-based applications and see how this leads to their peculiar management challenges. Unlike conventional applications, which can be monolithic, web applications are by their very nature amenable to layering. One cannot have a monolithic web application, as the client and the rest of the application have of necessity to be separated. In principle, one could have a thick-client application with a lot of intelligence embedded at the client end, but that would defeat the very purpose of delivering an application over the web. The idea is to have a client that is common to all users so that anybody can access the application without having to do anything special. This client is typically a browser of some sort that can render an HTML page on the user's screen.
Most of the processing is done at the other end of the application, that is, at the server. Here again there can be separation between the application and the data storage in the database. These two layers can be on separate machines that are themselves separated over the network.
The layering approach can bring much greater flexibility and simplicity in design, maintenance and usage. In a monolithic application, everything is coupled together; in a layered application, we can change one layer without affecting the behavior of the others. For example, the business logic at the application layer can be changed without affecting the user interface or the database design, and the database management system can be changed from one vendor's offering to another without changing the application code or the user interface.
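The separation of layers can be sketched in miniature as follows. This is a hypothetical illustration only: the class and method names are invented, and a real system would use a web framework and a database driver instead of these in-memory stand-ins.

```python
class DataLayer:
    """Storage layer: could be swapped for another vendor's DBMS."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows[key]

class BusinessLayer:
    """Application layer: business logic, unaware of storage details."""
    def __init__(self, data):
        self.data = data
    def register_user(self, name):
        self.data.save(name, {"name": name, "active": True})
    def greeting(self, name):
        return f"Welcome, {self.data.load(name)['name']}!"

class PresentationLayer:
    """Client-facing layer: renders what the business layer returns."""
    def __init__(self, logic):
        self.logic = logic
    def page(self, name):
        return f"<html><body>{self.logic.greeting(name)}</body></html>"

app = BusinessLayer(DataLayer())
app.register_user("Asha")
print(PresentationLayer(app).page("Asha"))
```

Because each layer talks only to the one below it through a small interface, the DataLayer could be replaced by a real database, or the PresentationLayer by a different client, without touching the business logic.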

There can be many kinds of web applications, such as:

* Those with static text
* Those whose content changes frequently
* Interactive websites that act on user input
* Portals that are merely gateways to different kinds of websites
* Commercial sites that allow transactions




COMMON GRAPHICAL USER INTERFACES




This post presents a list of terms commonly used with the graphical user interface
(GUI). GUIs are systems that allow creation and manipulation of user interfaces
employing windows, menus, icons, dialog boxes, mouse and keyboard. The Macintosh
Toolbox, Microsoft Windows and the X Window System are some examples of GUIs.


Pointing Devices:

Pointing devices allow users to point at different parts of the screen. Pointing devices
can be used to invoke a command from a list of commands presented in a menu. They
can also be used to manipulate objects on the screen by:
* Selecting objects on the screen
* Moving objects around the screen, or
* Merging several objects into another object.

Since the 1960s, a diverse set of tools has been used as pointing devices, including the
light pen, joystick, touch-sensitive screen and mouse. The popularity of the mouse is
due to the optimal hand coordination it allows and the easier tracking of the cursor on the screen.


Pointer:

A symbol that appears on the display screen and that you move to select objects and
commands. Usually the pointer appears as a small angled arrow.

Bit-Mapped Displays:

As memory chips get denser and cheaper, bit-mapped displays are replacing character-based
display screens. Bit-mapped displays are made up of tiny dots (pixels) that are independently
addressable, giving much finer resolution than character displays. Bit-mapped displays
have advantages over character displays. One of the major advantages is the graphic
manipulation capability for vector and raster graphics, which can present information on
screen as it will appear in its final form on paper (also called WYSIWYG: What You See Is What You Get).


Windows:


When a screen is split into several independent regions, each one is called a window.
Several applications can display results simultaneously in different windows. The
end-user can switch from one application to another or share data between
applications. Windowing systems can display windows either tiled or overlapped,
and users can organize the screen by resizing windows or moving related windows closer together.


Menus:

A menu displays a list of commands available within an application. Instead of
remembering commands at each stage, the end-user can select an operation such as
File, Edit or Search from a menu bar displaying the list of available commands.
Each menu item can be either a word or an icon representing a command or a
function. A menu item is invoked by moving the cursor onto the item and
selecting it by clicking the mouse.


Dialog boxes:


Dialog boxes allow more complex interaction between the user and the
computer. Dialog boxes employ a collection of control objects such as dials, buttons,
scroll bars and editable boxes.
In graphical user interfaces, textual data is not the only form of interaction. Icons
represent concepts such as file folders, wastebaskets and printers. Icons symbolize
words and concepts, and are commonly applied in different situations. In a painting
application, for example, each icon can represent a certain type of painting behavior:
once the pencil icon is clicked, the cursor can behave as a pencil to draw lines.
Applications of icons to user-interface design are still being explored in new computer
systems and software such as the NeXT computer user interface.
Dialog boxes are primarily used to collect information from the user or to present
information to the user. In a print dialog, for example, the information obtained
includes the number of copies and the page numbers to be printed. Dialog boxes are
also used to indicate error messages in the form of alert boxes. Dialog boxes use a
wide range of screen control elements to communicate with the user.



Icons:

Icons are used to provide a symbolic representation of any system- or user-defined object
such as a file, folder, address book, application and so on. Different types of objects
are represented by specific types of icons. In some GUIs, folders are represented by
a folder icon, which contains a group of files or other folder icons. Double-clicking
on the folder icon causes a window to be opened, displaying a list of file icons and
folder icons representing the folder's contents.


Desktop Metaphor:

The idea of metaphors has brought the computer closer to the natural environment of
the end-user. The concept of the physical-metaphor paradigm, developed by Alan Kay,
initiated most of the research for graphical user interfaces based on a new programming
approach called object-oriented programming; discussion of this subject is beyond this
unit. The physical metaphor is a way of saying that the visual displays of a computer
system should present images of real physical objects. For example, the wastepaper
basket icon can be used to discard objects from the
system by simply dragging the unwanted objects into the dustbin, as in real life. The
desktop metaphor probably has been the most famous paradigm. Because of the
large set of potential office users, this metaphor can have the most dramatic effect.
In this paradigm, the computer presents information and objects as they would
appear and behave in an office, using icons for folders, in-baskets, out-baskets and
calendars. In a desktop metaphor, users are not aware of applications; they deal with
files, folders, drawers, a clipboard and an outbox. Instead of starting the word
processor and loading a file, users merely open the report document: clicking the
mouse on an icon representing the report causes the word processor to start and to
load the report file implicitly. Today, several computing
environments provide this capability.


The 3D GUI:


The desktop metaphor GUI is 2½D. It is 2D because its visual elements are two-dimensional:
they lie in the xy plane, are defined in 2D coordinates, are flat and contain
only planar regions (areas). It is 2½D because where visual elements overlap they
obscure each other according to their priority. In a 3D GUI the visual elements are
genuinely three-dimensional: they are situated in xyz space, are defined in terms of 3D
coordinates, need not be flat and may contain spatial regions (volumes).
The design considerations for a 3D GUI appear more complex than for a 2½D GUI.
To begin with, the issues of metaphor and elements arise afresh. The desktop
metaphor, with its windows, icons, menus and pointing-device elements, is firmly
established for 2½D GUIs. In contrast, no clearly defined metaphor and set of
elements for 3D GUIs are manifest yet. 3D GUIs offer considerably more scope
for metaphors than 2½D GUIs; there are many metaphors which could be based on
our physical 3D environment, including the obvious extension of the desktop metaphor
into a 3D office metaphor. On the other hand, much more abstract metaphors are possible,
such as one based on "starmaps", where objects are simply placed somewhere in
"cyberspace". Likewise, the elements of a 3D GUI may resemble, or differ
substantially from, the elements of the 2½D GUI.
Various prototypes have been developed to design the same elements in the
3D GUI as in the 2½D desktop GUI: windows, icons, menus, a general space in
which to arrange the visual elements, a cursor and an input device to manipulate
the cursor.




EVOLUTION OF HUMAN AND MACHINE INTERACTION



The primary means of communication with computers in earlier days was through command-based
interfaces. In command interfaces, users have to learn a large set of commands to
get their job(s) done. In earlier computer systems, paper tapes, cards and batch jobs were
the primary means of communicating these commands to the computers. Later, time-sharing
systems allowed the use of CRT terminals to interact and communicate with the
computer. These early systems were heavily burdened by users trying to share precious
computer resources such as CPU and peripherals.

The batch systems and time-sharing led to command-driven user interfaces. Users had
to memorize commands and options or consult a large set of user manuals. The early
mainframe and minicomputer systems required a large set of instruction manuals on
how to use the system. In some systems, meaningful terms were used for command
names to help the end-user. But in other systems the end-user had to memorize several
sequences of keystrokes to accomplish certain tasks.

Early users of computers were engineers and what we now call expert users: users
who had a lot of interest in knowing more about computer systems and the technology.
Command-line interfaces were acceptable to the majority of these users. In the 1970s,
computers were introduced to a new class of users: secretaries, managers and
non-technical people. These new users were less interested in learning computer
technology and more interested in getting their jobs done through the machine. The
command-based interfaces caused many of these new users to develop computer
phobia. Imagine the thought of memorizing command sequences such as "Control-Alt-Del"
just to boot the system.


To make life easier for the end-user, a large collection of devices has been
invented to control, monitor and display information. The early (and still widely used)
peripherals are the keyboard and the video terminal. But it was not until the late
70s that research projects at some universities led to the invention of pointing
devices and windowing systems. The mouse and the joystick were among the
pointing devices invented in this period. Also, research pioneers
invented the notion of splitting the screen to allow multiple windows and direct
manipulation of objects.

In the 70s, researchers designed powerful new workstations armed with graphical
user-interfaces. The basic assumption of these new workstations was that one user
could have a powerful desktop computer totally dedicated to that user's task. Thus,
the computer is not only used to perform the task, but can also provide a much more
intuitive and easy-to-use environment. In this unit we will examine the common
GUIs.

Waterfall Model




The waterfall model consists of a linear sequence of distinct phases: requirement
analysis, specification, design, coding, testing and implementation.

Verification is defined as the question “Are we building the product right?” Validation
is defined as the question “Are we building the right product?”

Features of the waterfall model:

*  Systematic and linear approach towards software development.
*  Each phase is distinct.
*  The design and implementation phases begin only after analysis is over.
*  Proper feedback between phases, to minimize rework.

Drawbacks of the waterfall model:

*  Difficult for the customer to state all the requirements in advance.
*  Difficult to estimate resources with limited information.
*  Actual feedback comes only after the system is delivered. Thus, it is expensive to make changes during the later stages of software development.
*  Changes are not anticipated.



                                                    THE WATERFALL MODEL




SYSTEM DESIGNING



System designing is the process that starts after the completion of the analysis
phase. While the important data are gathered in the analysis phase, the planning of
activities takes place in the design phase.

System designing consists of various activities. The main activities conducted here
are:

* Modelling the data flow diagram of the system
* Creating the entity relationship diagram of the system
* Creating the data dictionary
* Database design
* Input form design
* Output form or report design.

Finally, after the completion of this phase, the development phase is performed where
these designs are used to achieve the proper look and feel of the software.


Data Flow Diagram for System

The data flow diagram (DFD) is one of the important tools of system designing; it
enables the software engineer to develop models of the information domain and the
functional domain at the same time.
It serves two purposes:

1. Provides an indication of how data are transformed as they move through the system
2. Depicts the Functions (and sub-functions) that transform the data flow.

The DFD provides additional information that is used during the analysis of the
information domain and serves as a basis for the modelling of function. A description
of each function presented in the DFD is contained in a process specification. As the
DFD is refined into greater levels of detail, the analyst performs an implicit
functional decomposition of the system.
One of the faults in our process while developing the system was that the system was
analysed, designed and implemented but the DFDs were developed as an afterthought.
Thus, you will find the following problems in the DFDs:

* The data flows have not been labelled in many cases.
* The processes are shown as linear processes, especially in the second-level DFDs.
* The verification activities are shown as performed in sequence, although they are not practically done that way in the actual situation.

Exercise Demonstration:

You must discuss the DFDs thoroughly in your sessions and come up with the optimal DFDs.












