Thursday, 20 June 2013

Characteristics Of A Good Programming Language?

There are some popular high-level programming languages, while there are others that could not become so popular in spite of being very powerful. There might be many reasons for the success of a language, but one obvious reason is its characteristics. Several characteristics believed to be important for making a programming language good are discussed below:

Simplicity:

A good programming language must be simple and easy to learn and use. It should provide a programmer with a clear, simple and unified set of concepts that can be grasped easily. The overall simplicity of a programming language strongly affects the readability of the programs written in that language, and programs that are easier to read and understand are easier to maintain. It is also easy to develop and implement a compiler or an interpreter for a simple language. However, the power needed in the language should not be sacrificed for simplicity. For example, BASIC is liked by many programmers because of its simplicity.

1-Naturalness:

A good language should be natural for the application area for which it is designed. That is, it should provide appropriate operators, data structures, control structures and a natural syntax to facilitate programmers in coding their problems easily and efficiently. FORTRAN and COBOL are good examples of languages possessing a high degree of naturalness in the scientific and business application areas, respectively.

2-Abstraction:

Abstraction means the ability to define and then use complicated structures or operations in ways that allow many of the details to be ignored. The degree of abstraction allowed by a language directly affects its ease of programming. For example, object-oriented languages support a high degree of abstraction. Hence, writing programs in object-oriented languages is much easier. Object-oriented languages also support reusability of program segments due to this feature.
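
As a minimal sketch (the class and names below are invented for this illustration, not taken from any particular language standard), an object-oriented Python class hides how a stack stores its elements, so users of the class can ignore those details entirely:

    class Stack:
        """Abstracts away how elements are stored; callers only see push/pop."""
        def __init__(self):
            self._items = []          # internal detail, hidden from callers

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    s = Stack()
    s.push(10)
    s.push(20)
    print(s.pop())  # 20 -- the caller never touches the underlying list

The caller's code would not change even if the internal list were replaced by some other structure, which is exactly the reusability benefit mentioned above.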

3-Efficiency:
Programs written in a good language are translated into machine code efficiently, are executed efficiently, and require relatively less space in memory. That is, a good programming language is supported by a good language translator (a compiler or an interpreter) that gives due consideration to space and time efficiency.

4-Structured Programming Support:

A good language should have the necessary features to allow programmers to write their programs based on the concepts of structured programming. This property greatly affects the ease with which a program may be written, tested and maintained. Moreover, it forces a programmer to look at a problem in a logical way, so that fewer errors are created while writing a program for the problem.
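
To make the idea concrete, here is a small Python sketch (the problem and names are invented for illustration) in which the work is broken into single-entry, single-exit structured blocks - a function, a loop and a decision - instead of an unstructured tangle of jumps:

    def classify(numbers):
        """Structured decomposition: one function, one loop, one decision."""
        evens, odds = [], []
        for n in numbers:            # iteration block
            if n % 2 == 0:           # selection block
                evens.append(n)
            else:
                odds.append(n)
        return evens, odds           # single exit point

    print(classify([1, 2, 3, 4, 5]))  # ([2, 4], [1, 3, 5])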

5-Compactness:

In a good language, programmers should be able to express the intended operations concisely, without losing readability. Programmers generally do not like a verbose language because they need to write too much. Many programmers dislike COBOL because it is verbose in nature (it lacks compactness).

6-Locality:

A good language should be such that, while writing a program, a programmer need not jump around visually as the text of the program is prepared. This allows the programmer to concentrate almost solely on the part of the program around the statement currently being worked with. COBOL, and to some extent C and Pascal, lack locality because data definitions are separated from processing statements, perhaps by many pages of code, or have to appear before any processing statement in the function/procedure.

7-Extensibility:

A good language should also allow extensions through a simple, natural and elegant mechanism. Almost all languages provide subprogram definition mechanisms for this purpose, but some languages are weak in this aspect.
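
For instance (a minimal sketch with an invented subprogram name), most languages let a programmer extend the set of available operations simply by defining a subprogram, which can then be used exactly like a built-in operation:

    def average(values):
        """A user-defined subprogram that extends the language's repertoire."""
        return sum(values) / len(values)

    print(average([70, 80, 90]))  # 80.0 -- used just like a built-in function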

8-Suitability to its Environment:

Depending upon the type of application for which a programming language has been designed, the language must also be made suitable to its environment. For example, a language designed for real-time applications must be interactive in nature. On the other hand, languages used for data-processing jobs like payroll, stores accounting, etc. may be designed to operate in batch mode.

Thursday, 13 June 2013

Computer System

The Terms "hardware" and "software" are used frequently in connection with computer.Hard wares refer to the physical devices of a computer system.Thus,input,storage,processing,control and output devices are hardware.In fact,what we have learnt so far in earlier chapters is actually the hardware of computer systems.The term "software" will be introduced and will be discussed at length in the  next few articles.

What Is Software?

A computer cannot do anything on its own; it must be instructed to do a job desired by us. Hence, it is necessary to specify a sequence of instructions, written in a language understood by the computer, to do the job. Such a sequence of instructions is called a computer program. A program controls a computer's processing activity, and the computer performs precisely what the program wants it to do. When a computer is using a program to perform a task, we say it is running or executing the program.
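
As a tiny illustrative example (invented for this article, in Python), the following program is exactly such a sequence of instructions; when it is run, the computer performs precisely these steps, in order:

    # A computer program: a sequence of instructions executed in order.
    price = 250                 # instruction 1: store a value
    quantity = 4                # instruction 2: store another value
    total = price * quantity    # instruction 3: process the stored data
    print("Total:", total)      # instruction 4: output the result
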
The term software refers to a set of computer programs, procedures and associated documents (flowcharts, manuals, etc.) describing the programs and how they are to be used.
A software package is a group of programs that solve a specific problem or perform a specific type of job. For example, a word-processing package contains programs for text editing, text formatting, drawing graphics, spelling checking, etc. Hence, a multipurpose computer system, like a personal computer in your home, has several software packages, one for each type of job it can perform.
Relationship Between Hardware and Software:
For a computer to produce useful output, its hardware and software must work together. Nothing useful can be done with the hardware on its own, and software cannot be utilized without supporting hardware.

To take an analogy, a cassette player and the cassettes purchased from the market are hardware. However, the songs recorded on the cassettes are its software. To listen to a song, that song has to be recorded on one of the cassettes first, which is then mounted on the cassette player and played. Similarly, to get a job done by a computer, the corresponding software has to be loaded in the hardware first and then executed.
The following important points regarding the relationship between hardware and software are brought out by this analogy:

1-Both hardware and software are necessary for a computer to do useful jobs. Both are complementary to each other.
2-The same hardware can be loaded with different software to make a computer perform different types of jobs, just as different songs can be played using the same cassette player.
3-Except for upgrades (like increasing main memory and hard disk capacities, or adding speakers and modems, etc.), hardware is normally a one-time expense, whereas software is a continuing expense. Just as we buy new cassettes for newly released songs or for songs whose cassettes we do not have, we buy new software to be run on the same hardware as and when the need arises or funds become available.

Types Of Software:

Although the range of software available today is vast and varied, most software can be divided into two major categories:

1-System Software
2-Application Software

System Software:
System software is a set of one or more programs designed to control the operation and extend the processing capability of a computer system. In general, a computer's system software performs one or more of the following functions:
1-Supports the development of other application software.
2-Supports execution of other application software.
3-Monitors the effective use of various hardware resources, such as CPU, memory, peripherals, etc.
4-Communicates with and controls the operation of peripheral devices, such as printers, disks, tapes, etc.

Hence, system software makes the operation of a computer system more effective and efficient. It helps the hardware components work together and provides support for the development and execution of application software (programs). The programs included in a system software package are called system programs. The programmers who prepare system software are referred to as system programmers.
Some commonly known types of system software are:

1-Operating System:
An operating system takes care of the effective and efficient utilization of all the hardware and software components of a computer system.
2-Programming Language Translators:

Programming language translators transform the instructions prepared by programmers in a programming language into a form that can be interpreted and executed by a computer system.
3-Communication Software:

In a network environment (where multiple computers are interconnected by a communications network), communications software enables the transfer of data and programs from one computer system to another.
4-Utility Programs:

Utility programs (also known as utilities) are a set of programs that help users in system maintenance tasks and in performing tasks of a routine nature. Some tasks commonly performed by utility programs include formatting of hard disks or floppy disks, taking backup of files stored on a hard disk onto a tape or floppy disk, and sorting of the records stored in a file based on some key field.
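
As a hedged sketch of the last task mentioned above (the file name and record layout are invented for illustration), sorting the records stored in a file based on a key field might look like this in Python:

    # Sort records of a comma-separated file on a key field (field 0 = name).
    # "records.txt" and its layout are hypothetical, for illustration only.
    with open("records.txt") as f:
        records = [line.strip().split(",") for line in f if line.strip()]

    records.sort(key=lambda record: record[0])   # key field: the first column

    with open("records_sorted.txt", "w") as f:
        for record in records:
            f.write(",".join(record) + "\n")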

Application Software:
Application software is a set of one or more programs designed to solve a specific problem or do a specific task. For example, payroll processing software, examination results processing software, railway/airline reservation software and computer games software are all application software. Similarly, a program written by a scientist to solve a research problem is also application software. The programs included in an application software package are called application programs. The programmers who prepare application software are referred to as application programmers.

There are literally millions of application software packages available for a wide range of applications. They range from simple applications such as word processing, inventory management, preparation of tax returns, banking, hospital administration, insurance and publishing, to complex scientific and engineering applications such as weather forecasting, space shuttle launching, oil and natural gas exploration, and design of complex structures like aircraft, ships, bridges and high-rise buildings. With so many applications available, it is not possible to categorize them all and to cover them here. Some commonly known application software are:

1-Word-Processing Software:
Word-processing software enables us to make use of a computer for creating, editing, viewing, formatting, storing, retrieving and printing documents (written material, such as letters, reports, books, etc.).

2-Spreadsheet Software:
Spreadsheet software is a numeric-data analysis tool that allows us to create a kind of computerized ledger. A manual ledger is a book having rows and columns that accountants use for keeping a record of financial transactions and for preparing financial statements.

3-Database Software:
A database is a collection of related data stored and treated as a unit for information retrieval purposes. Database software is a set of programs that enable us to create a database, maintain it (add, delete and update its records), organize its data in a desired fashion (for example, sort its records alphabetically name-wise) and selectively retrieve useful information from it. For example, queries such as get the telephone number of the person named Kashyap Rana from the address database, or get the names of all currently enrolled students whose birthdays fall today from the student database, can be handled easily.
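
A minimal sketch of the first query, using Python's built-in sqlite3 module (the table layout and the telephone number are assumed purely for illustration; real database software would differ):

    import sqlite3

    # A hypothetical address database with a single table of people.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE address (name TEXT, telephone TEXT)")
    db.execute("INSERT INTO address VALUES ('Kashyap Rana', '98765-43210')")

    # The query: get the telephone number of the person named Kashyap Rana.
    row = db.execute(
        "SELECT telephone FROM address WHERE name = ?", ("Kashyap Rana",)
    ).fetchone()
    print(row[0])   # prints the stored (hypothetical) telephone number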

4-Graphics Software:
Graphics software enables us to use a computer system for creating, editing, viewing, storing, retrieving and printing designs, drawings, graphs, etc.

5-Personal Assistance Software:
Personal assistance software allows us to use a personal computer for storage and retrieval of our personal information, as well as for planning and management of our schedules, contacts, finances and inventory of important items.

6-Education Software:
Education software allows a computer to be used as a teaching and learning tool. A few examples of such software include those used for teaching mathematics, grammar, language or any other subject.

7-Entertainment Software:
Entertainment software allows a computer to be used as an entertainment tool. Computer video games belong to this category of software.

Logical System Architecture:
It depicts the relationship among the hardware, system software, application software and users of a computer system. At the centre is the hardware, comprising the physical devices/components of the computer system. Surrounding the hardware is the system software layer that constitutes the operating and programming environment of the computer system. It hides the hardware details of the system from application programmers and coordinates the operations of the various hardware devices for optimizing the performance of all of them. Surrounding the system software is the application software layer, consisting of a wide range of application software. Finally, the users layer consists of the user interfaces provided by the application software. Different application software usually provides different user interfaces. Hence, how a user interacts with the computer system depends on which application he/she is using.

Acquiring Software:
Earlier, application and system software were included in the purchase price of a computer. Today, software is usually not included in the purchase price of a computer. For most computer manufacturers, the purchase price of a computer includes only its hardware and a minimum of system software. A customer normally has to pay extra charges for any additional system software and application software that he/she may wish to purchase.
Desired software is obtained today in one or more of the ways discussed below. The relative advantages and limitations of each way of obtaining software are also discussed.

Buying Pre-written Software:
Thousands of pre-written software packages are available today. If you can find a software package that meets your requirements, purchasing it is the best option. The following steps are followed in selecting and buying a pre-written software package:

1-Prepare a list of all available software packages that can perform the desired tasks.
2-From the list, select only those software packages that meet the system specifications, for example, compatibility with the user's available/planned hardware, I/O devices, operating system, etc.
3-Now choose the best one (based on factors such as supported features, duration of warranty support, cost, etc.) from those selected in step 2.
4-Now find out the source from where you can purchase the finally selected software at the cheapest price. Different vendors normally offer different discount rates on the list price, and selecting the best vendor in terms of price and after-sale support is very important.
The following are the advantages and limitations of buying a pre-written software package:
1-Pre-written software packages usually cost less because many customers share their development and maintenance cost.

2-With a pre-written software package, a user can start the planned activity almost immediately. The user need not wait for the software to be developed and tested. This may be very important if the development and testing efforts involve several months.
3-Pre-written software packages are usually general-purpose, so that they can meet the requirements of as many potential users as possible. Due to this, many times the operating efficiency and the capability to meet the specific needs of a user are not as good for a pre-written software package as for an in-house developed software package.

Ordering Customized Software:
If none of the available pre-written software packages meets the specific requirements of a user (an organization or an individual), it becomes necessary for the user to create a customized software package. If the user has an in-house software development team, the software package can be created in-house. However, if such a team does not exist in-house, the user must get it created by another organization by placing an order for it. The following steps are followed for this:

1-The user prepares a list of all requirements carefully.
2-The user then floats a tender for inviting quotations for the creation of the requisite software. Sometimes, the user directly contacts a few software houses instead of floating a tender for quotations.
3-After receiving the quotations, the user selects a few of them for further interaction, based on the cost quoted by them, their reputation in the market, their submitted proposal, etc.
4-The user then personally interacts with the representatives of each of the selected vendors. Based on this interaction, the user makes a final choice of the vendor to whom to offer the contract for the creation of the requisite software.
5-The selected vendor then creates the software package and delivers it to the user. Often, the vendor has to interact closely with the user during the software development process.

Often, the user has to order both hardware and software. In this case, the user may choose to place the order for both with a single vendor. The vendor develops the software on the chosen hardware and delivers the software along with the hardware to the user. This is normally referred to as an end-to-end solution or a turnkey solution.
The following are the advantages and limitations of ordering a customized software package rather than developing it in-house:

1-The user need not maintain its own software development team. Maintaining and managing such a team is expensive and may not be justified for an organization not needing to develop software on a regular basis.
2-It is easier to carry out changes in the software if an in-house team develops it. For ordered customized software, the user always needs to depend on the vendor for carrying out the changes, and the vendor may charge separately for every change request.

Developing Customized Software:
If none of the available pre-written software packages meets the specific requirements of an organization, and if the organization has an in-house software development team, the organization may choose to develop a customized software package in-house for its requirements. The following steps are followed for in-house development of a software package:

1-The organization first constitutes a project team to carry out the development activity.
2-The team studies the requirements carefully and plans the functional modules for the software.
3-It then analyzes which of the functional modules need to be developed and which of the functional modules' requirements can be met with existing pre-written software.
4-For the functional modules that need to be developed, the team next plans their programs and does the coding, testing, debugging and documentation for the planned programs.
5-The team then tests all the modules in an integrated manner.
6-The software is then implemented, used and maintained.
The following are the advantages and limitations of developing a customized software package in-house rather than getting it developed by an outside party:
1-It is easier to carry out changes in the software,if it is developed in-house.
2-Developing software in-house means a major commitment of time,money and resources because an in-house software development team needs to be maintained and managed.

Downloading Public-domain Software:
Public-domain software is available free or for a nominal charge from bulletin boards or user-group libraries on the Internet. The basic objective of the creators of such software is to popularize their software among as many users as possible. Users are encouraged to copy such software and try it out. The software can usually be shared freely with other users. Hence, public-domain software is also referred to as shareware/freeware. It is also known as community-supported software, as mostly the authors do not support the product directly, and the users of the software support and help each other. While some shareware remains full-featured and free perpetually, some other software has to be bought to use all its features (such partial packages are referred to as Lite/Free/Personal Editions) or after some period of use (referred to as a trial period).

Another type of public-domain software, becoming popular among users and fellow programmers alike, is software that comes with its source code as well. Such software is referred to as Open Source Software (OSS). It usually allows a user to download, view, modify and distribute the modified source code to others. Such software and its source code are usually covered under a licensing system that promotes the open source movement and protects the copyright of the original and subsequent authors. The General Public License (GPL) of the GNU organization and the BSD license from the Berkeley Software Division of the University of California are some of the most popular licensing systems in place and in use the world over.

One fact to be borne in mind is that all open source software is not necessarily free, and vice versa.
Often a user may find public-domain software suitable for his/her requirements. In this case, he/she can obtain it by downloading it from the Internet.
The following are the advantages and limitations of downloading and using public-domain software packages:

1-They are usually free and accompanied by their source code.
2-They can be downloaded and used immediately. The user need not wait for the software to be developed and tested before the planned activity can be started.
3-They may not be properly tested before release, and their support is normally poor as compared to commercial software. Hence, they may fail during operation, and the bug may not be fixed soon.

Software Development Steps:
All software needs to be developed by someone. Developing software and putting it to use is a complex process involving the following steps:
1-Analyzing the problem at hand, and planning the programs to solve the problem.
2-Coding the programs.
3-Testing, debugging and documenting the programs.
4-Implementing the programs.
5-Evaluating and maintaining the programs.

Firmware:
Firmware refers to a sequence of instructions (software) substituted for hardware. For example, when cost is more important than performance, a computer system architect might decide not to use special electronic circuits (hardware) to multiply two numbers, but instead write instructions (software) to cause the machine to accomplish the same function by repeated use of circuits already designed to perform addition. This software is stored in a read-only memory (ROM) chip of the computer and is executed whenever the computer has to multiply two numbers. Hence, firmware is software substituted for hardware and stored in read-only memory.
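
The multiplication-by-repeated-addition idea can be sketched in a few lines (shown here in Python rather than in ROM microcode, purely as an illustration):

    def multiply(a, b):
        """Multiply two non-negative integers using only repeated addition,
        mimicking firmware that reuses the machine's addition circuitry."""
        result = 0
        for _ in range(b):          # add 'a' to the result, 'b' times
            result = result + a
        return result

    print(multiply(6, 7))  # 42, computed without a multiply circuit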

Initially, only system software was supplied in the form of firmware. However, today even application programs are supplied in firmware. Dedicated applications are also programmed in this fashion and made available in firmware. Because of the rapid improvement in memory technology, firmware is frequently a cost-effective alternative to wired electronic circuits, and its use in computer design has been gradually increasing. In fact, the increased use of firmware has today made it possible to produce smart machines of all types. These machines have microprocessor chips with embedded software.

Middle-ware:
In the early days of computer networks, the two-tier client-server system architecture was used to implement distributed applications. In this two-tier system architecture, several client computers were connected to and serviced by one or more server computers for meeting the objectives of the applications. This architecture was inflexible and had its own limitations due to the following reasons:

1-Management and support services, such as directories for locating resources, authentication for handling security, exception management for handling failures, and co-scheduling of networked resources for better performance, had to be programmed within the client and server parts of the application software.
2-The client and server had to be properly synchronized for the exchange of messages between them (they used the synchronous mode of communication rather than the asynchronous mode, which is more flexible).
3-Business application logic had to be programmed inside the user interface on the client, or within the application software on the server, or both.
As a result, the client and server systems were closely knit together, making independent development of interoperable client and server software increasingly difficult. Hence, rapid development and deployment, ease of management, and portability of distributed applications were difficult.

The three-tier system architecture was invented to overcome these problems. The basic idea here was to have a separate software layer that acts as "glue" between the client and server parts of an application and provides a programming abstraction as well as masks the heterogeneity of the underlying networks, hardware and operating systems from application programmers. This software layer is known as middleware because it sits in the middle, between the operating system and the applications. In general, middleware is defined as a set of tools and data that helps applications use networked resources and services.

Tuesday, 11 June 2013

What Are The Types Of Computer Processors?

CISC Processors:

One of the earlier goals of CPU designers was to provide more and more instructions in the instruction set of a CPU, to ensure that the CPU supports more functions directly. This makes it easier to translate high-level language programs to machine language and ensures that the machine language programs run more effectively. Of course, every additional instruction in the instruction set of a CPU requires the necessary hardware circuitry to handle that instruction, adding more complexity to the CPU's hardware circuitry. Another goal of CPU designers was to optimize the usage of expensive memory. To achieve this, the designers tried to pack more instructions in memory by introducing the concept of variable-length instructions, such as half-word, one-and-a-half-word, etc. For example, an operand in an immediate instruction needs fewer bits and can be designed as a half-word instruction.

Additionally, CPUs were designed to support a variety of addressing modes (discussed later in the chapter during the discussion of memory). CPUs with a large instruction set, variable-length instructions and a variety of addressing modes are said to employ CISC (Complex Instruction Set Computer) architecture. Since CISC processors possess so many processing features, they make the job of machine language programmers easier. However, they are complex and expensive to produce. Most personal computers of today use CISC processors.

RISC Processors:

In the early 1980s, some CPU designers realised that many instructions supported by a CISC-based CPU are rarely used. Hence, an idea evolved that the design complexity of a CPU can be reduced greatly by implementing only a bare minimum basic set of instructions, plus some of the more frequently used instructions, in the hardware circuitry of the CPU. Other complex instructions need not be supported in the instruction set of the CPU because they can always be implemented in software by using the basic set of instructions. While working on simpler CPU designs, the designers also came up with the idea of making all instructions of uniform length, so that the decoding and execution of all instructions becomes simple and fast.

Furthermore, to speed up computation and reduce the complexity of handling a number of addressing modes, they decided to design all the instructions in such a way that they retrieve operands stored in registers in the CPU rather than from memory. These design ideas resulted in faster and less expensive processors. CPUs with a small instruction set, fixed-length instructions and reduced references to memory to retrieve operands are said to employ RISC (Reduced Instruction Set Computer) architecture. Since RISC processors have a small instruction set, they place extra demand on programmers, who must consider how to implement complex computations by combining simple instructions. However, RISC processors are faster for most applications, less complex and less expensive to produce than CISC processors because of their simpler design.


EPIC Processors:

The Explicitly Parallel Instruction Computing (EPIC) technology breaks through the sequential nature of conventional processor architectures by allowing the software to communicate explicitly to the processor when operations can be done in parallel. For this, it uses tighter coupling between the compiler and the processor. It enables the compiler to extract maximum parallelism in the original code and explicitly describe it to the processor. Processors based on EPIC architecture are simpler and more powerful than traditional CISC or RISC processors. These processors are mainly targeted at the next-generation, 64-bit, high-end server and workstation market (not the personal computer market).

Multicore Processors:

Till recently, the approach used for building faster processors was to keep reducing the size of chips while increasing the number of transistors they contain. Although this trend has driven the computing industry for several years, it has now been realised that transistors cannot shrink forever. Current transistor technology limits the ability to continue making single-core processors more powerful due to the following reasons:

1-As a transistor gets smaller, the gate, which switches the electricity on or off, gets thinner and less able to block the flow of electrons. Thus, small transistors tend to use electricity all the time, even when they are not switching. This wastes power.
2-Increasing clock speeds causes transistors to switch faster and, hence, generate more heat and consume more power.

These and other challenges have forced processor manufacturers to research new approaches for building faster processors. In response, manufacturers are now building multicore processor chips instead of increasingly powerful (faster) single-core processor chips. That is, in the new architecture, a processor chip has multiple cooler-running, more energy-efficient processing cores instead of one increasingly powerful core. The multicore chips do not necessarily run as fast as the highest-performing single-core models, but they improve overall performance by handling more work in parallel. For instance, a dual-core chip running multiple applications is about 1.5 times faster than a chip with just one comparable core.

The operating system (OS) controls the overall assignment of tasks in a multicore processor. In a multicore processor, each core has its own independent cache (though in some designs all cores share the same cache), thus providing the OS with sufficient resources to handle multiple applications in parallel. When a single-core chip runs multiple programs, the OS assigns a time slice to work on one program and then assigns different time slices to the other programs. This can cause conflicts, errors or slowdowns when the processor must perform multiple tasks simultaneously. However, multiple programs can be run at the same time on a multicore chip, with each core handling a separate program. The same logic holds for running multiple threads of a multi-threaded application at the same time on a multicore chip, with each core handling a separate thread. Based on this, either the OS or a multi-threaded application parcels out work to the multiple cores.
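
A hedged sketch of how an application can parcel out work to multiple cores, using Python's multiprocessing module (the work function and the worker count are invented for illustration):

    from multiprocessing import Pool

    def work(n):
        """A stand-in compute task; each call can run on a separate core."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool(processes=4) as pool:            # e.g. one worker per core
            results = pool.map(work, [10**5] * 4)  # four tasks in parallel
        print(results)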

Multicore processors have the following advantages over single-core processors:

1-They enable building of computers with better overall system performance by handling more work in parallel.
2-For comparable performance, multicore chips consume less power and generate less heat than single-core chips. Hence, multicore technology is also referred to as energy-efficient or power-aware processor technology.
3-Because the chip's cores are on the same die in the case of multicore processor architecture, they can share architectural components, such as memory elements and memory management. They thus have fewer components and lower costs than systems running multiple chips (each with a single-core processor).
4-Also, the signalling between cores can be faster and use less electricity than on multichip systems.

Multicore processors, however, currently have the following limitations:

1-To take advantage of multicore chips, applications must be redesigned so that the processor can run them as multiple threads. Note that it is more challenging to create software that is multi-threaded.
2-To redesign applications, programmers must find good places to break up the applications, divide the work into roughly equal pieces that can run at the same time, and determine the best times for the threads to communicate with one another. All this adds extra work for programmers.
3-Software vendors often charge customers for each processor that will run the software (one software license per processor). A customer running an application on an 8-processor machine (a multiprocessor computer) with single-core processors would thus pay for 8 licenses. A key issue with multicore chips is that software vendors have different views regarding this licensing. Some consider a processor as a unit that plugs into a single socket on the motherboard, regardless of whether it has one or more cores; hence, a single software license is sufficient for a multicore chip. On the other hand, others charge more to use their software on multicore chips, licensing on a per-core basis. They are of the opinion that customers get added performance benefit by running the software on a chip with multiple cores, so they should pay more. Multicore chip makers are concerned that this type of non-uniform policy will hurt their product sales.

Chip makers like Intel, AMD, IBM and Sun have already introduced multicore chips for servers, desktops and laptops. The current multicore chips are dual-core (2 cores per chip), quad-core (4 cores per chip), 8 cores per chip and 16 cores per chip. Industry experts predict that multicore processors will be useful immediately in server-class machines, but won't be very useful on desktop systems until software vendors develop considerably more multi-threaded software. Until this occurs, single-core chips will continue to be used. Also, since single-core chips are inexpensive to manufacture, they will continue to be popular for low-priced PCs for a while.

Thursday, 6 June 2013

Basic Computer Organization

Even though the size, shape, performance, reliability and cost of computers have been changing over the years, the basic logical structure (based on the stored program concept) proposed by Von Neumann has not changed. No matter what shape and size of computer we are talking about, all computer systems perform the following five basic operations for converting raw input data into information that is useful to their users. (A small code sketch after the list ties these five operations together.)

1-Inputting:

The process of entering data and instructions into the computer system.

2-Storing:

Saving data and instructions to make them readily available for initial or additional processing as and when required.

3-Processing:

Performing arithmetic operations (add, subtract, multiply, divide, etc.) or logical operations (comparisons like equal to, less than, greater than, etc.) on data to convert them into useful information.

4-Outputting:

The process of producing useful information or results for the user,such as a printed report or visual display.

5-Controlling:

Directing the manner and sequence in which all of the above operations are performed.
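
A tiny Python sketch (invented for this article) that walks through all five operations on one line of data:

    # Inputting: read raw data from the user.
    raw = input("Enter numbers separated by spaces: ")

    # Storing: hold the data in memory for processing.
    numbers = [int(token) for token in raw.split()]

    # Processing: arithmetic (sum) and logic (comparison) on the stored data.
    total = sum(numbers)
    big = [n for n in numbers if n > 10]

    # Outputting: produce useful information for the user.
    print("Sum:", total, "| Values greater than 10:", big)

    # Controlling: the top-to-bottom order of these statements directs the
    # manner and sequence in which the operations are performed.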



The goal of this article is to familiarise you with the computer system units that perform these functions. It will provide you with an overview of computer systems as they are viewed by computer system architects.

The internal architecture of computers differs from one system model to another. However, the basic organisation remains the same for all computer systems. A block diagram of the basic computer organisation is shown in the figure below. In this figure, the solid lines indicate the flow of instructions and data, and the dotted lines represent the control exercised by the control unit. It displays the five major building blocks (functional units) of a digital computer system. These five units correspond to the five basic operations performed by all computer systems. The function of each of these units is described below:


Figure: Basic Computer Organization

Input Unit:

Data and instructions must enter the computer system before any computation can be performed on the supplied data. This task is performed by the input unit, which links the external environment with the computer system. Data and instructions enter input units in forms that depend upon the particular device used. For example, data are entered from a keyboard in a manner similar to typing, and this differs from the way in which data are entered through a scanner, which is another type of input device. However, regardless of the form in which they receive their inputs, all input devices must transform the input data into the binary codes that the primary memory of a computer is designed to accept. This transformation is accomplished by units called input interfaces. Input interfaces are designed to match the unique physical or electrical characteristics of input devices to the requirements of the computer system.

In short, the following functions are performed by an input unit:

1-It accepts (or reads) the instructions and data from the outside world.
2-It converts these instructions and data into computer-acceptable form.
3-It supplies the converted instructions and data to the computer system for further processing.


Output Unit:

The job of an output unit is just the reverse of that of an input unit. It supplies the information obtained from data processing to the outside world. Hence, it links the computer with the external environment. As computers work with binary code, the results produced are also in binary form. Hence, before supplying the results to the outside world, they must be converted to human-acceptable (readable) form. This task is accomplished by units called output interfaces. Output interfaces are designed to match the unique physical or electrical characteristics of output devices (terminals, printers, etc.) to the requirements of the external environment.

In short, the following functions are performed by an output unit:

1-It accepts the results produced by the computer, which are in coded form and hence cannot be easily understood by us.
2-It converts these coded results to human-acceptable (readable) form.
3-It supplies the converted results to the outside world.

Storage Unit:

The data and instructions, which are entered into the computer system through input units, have to be stored inside the computer before the actual processing starts. Similarly, the results produced by the computer after processing must also be kept somewhere inside the computer system before being passed on to the output units. Moreover, the intermediate results produced by the computer must also be preserved for ongoing processing. The storage unit of a computer system is designed to cater to all these needs. It provides space for storing data and instructions, space for intermediate results, and space for the final results.

In short, the functions of the storage unit are to hold (store):

1-The data and instructions required for processing (received from input devices).
2-Intermediate results of processing.
3-Final results of processing, before these results are released to an output device.

The storage unit of all computers is comprised of the following two types of storage:

1-Primary Storage:

The primary storage, also known as main memory, is used to hold pieces of program instructions and data, intermediate results of processing, and recently produced results of the job(s) on which the computer system is currently working. These pieces of information are represented electronically in the main memory chip's circuitry, and while they remain in the main memory, the central processing unit can access them directly at very fast speed. However, the primary storage can hold information only while the computer system is on. As soon as the computer system is switched off or reset, the information held in the primary storage disappears. Moreover, the primary storage normally has limited storage capacity because it is very expensive. The primary storage of modern computer systems is made up of semiconductor devices.

2-Secondary Storage:

The secondary storage, also known as auxiliary storage, is used to take care of the limitations of the primary storage. That is, it is used to supplement the limited storage capacity and the volatile characteristic of the primary storage. This is because secondary storage is much cheaper than primary storage, and it can retain information even when the computer system is switched off or reset. The secondary storage is normally used to hold the program instructions, data and information of those jobs on which the computer system is not currently working, but needs to hold for processing later. The most commonly used secondary storage medium is the magnetic disk.


Arithmetic Logic Unit:

The arithmetic logic unit (ALU) of a computer system is the place where the actual execution of instructions takes place during the processing operation. To be more precise, all calculations are performed and all comparisons (decisions) are made in the ALU. The data and instructions stored in the primary storage before processing are transferred as and when needed to the ALU, where the processing takes place; intermediate results generated in the ALU are temporarily transferred back to the primary storage until needed later. Hence, data may move from primary storage to the ALU and back again to storage many times before the processing is over.

The type and number of arithmetic and logic operations that a computer can perform is determined by the engineering design of the ALU. However, almost all ALUs are designed to perform the four basic arithmetic operations (add, subtract, multiply and divide) and logic operations or comparisons, such as less than, equal to and greater than.
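
As an illustrative sketch (not how a real ALU is wired), the four basic arithmetic operations and the three comparisons can be mimicked in a couple of lines of Python:

    a, b = 15, 4

    # The four basic arithmetic operations an ALU typically supports.
    print(a + b, a - b, a * b, a / b)   # 19 11 60 3.75

    # The basic logic operations (comparisons).
    print(a < b, a == b, a > b)         # False False True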

Control Unit:

How does the input device know that it is time for it to feed data into the storage unit? How does the ALU know what should be done with the data once they are received? And how is it that only the final results, and not the intermediate results, are sent to the output unit? All this is possible due to the control unit of the computer system. Although it does not perform any actual processing on the data, the control unit acts as a central nervous system for the other components of the computer system. It manages and coordinates the entire computer system. It obtains instructions from the program stored in main memory, interprets the instructions, and issues signals that cause the other units of the system to execute them.


Central Processing Unit:

The control unit and the arithmetic logic unit of a computer system are jointly known as the Central Processing Unit (CPU). The CPU is the brain of a computer system. In a human body, all major decisions are taken by the brain, and the other parts of the body function as directed by the brain. Similarly, in a computer system, all major calculations and comparisons are made inside the CPU, and the CPU is responsible for activating and controlling the operations of the other units of the computer system.

The System Concept:

You might have observed that we have been referring to a computer as a system (computer system). What can be the reason behind this? To know the answer, let us first understand the definition of a system.

A system is a group of integrated parts that have the common purpose of achieving some objective. Hence, the following three characteristics are key to a system:

1-A system has more than one element.
2-All the elements of a system are logically related.
3-All the elements of a system are controlled in a manner to achieve the system goal.

Since a computer is made up of integrated components (input, output, storage and CPU) that work together to perform the steps called for in the program being executed, it is a system. The input or output units cannot function until they receive signals from the CPU. Similarly, the storage unit or the CPU alone is of no use. Hence, the usefulness of each unit depends on the other units, and can be realised only when all units are put together (integrated) to form a system.

Points To Remember:

1-All computer systems perform the following five basic operations for converting raw input data into useful information: inputting, storing, processing, outputting and controlling.

2-The main components of a computer system are shown in the figure above.

3-The input unit allows data and instructions to be fed to the computer system from the outside world, in computer-acceptable form.

4-The input interfaces transform the input data and instructions, fed to the computer through its input devices, into the binary codes that are acceptable to the computer.

5-The output unit allows the computer system to supply the information, obtained from data processing, to the outside world, in human-acceptable (readable) form.

6-The output interfaces transform the information obtained from data processing from binary form to human-acceptable (readable) form.

7-The storage unit of a computer system holds the data and instructions to be processed, and the intermediate and final results of processing. The two types of storage are primary and secondary storage. As compared to primary storage, secondary storage is slower in operation, larger in capacity, cheaper in price, and can retain information even when the computer system is switched off or reset.

8-During data processing, the actual execution of the instructions takes place in the arithmetic logic unit (ALU) of the computer system.

9-The Control Unit of a computer system manages and coordinates the operation of all the other components of the computer system.

10-The control unit and the arithmetic logic unit of a computer system are jointly known as the Central Processing Unit (CPU), which serves as the brain of the computer system and is responsible for controlling the operations of all other units of the system.

11-A computer is often referred to as a computer system because it is made up of integrated components (input, output, storage and CPU) that work together to perform the steps called for in the program being executed.


Tuesday, 4 June 2013

The Computer Generations

A generation in computer terminology is a step in technology. It provides a framework for the growth of the computer industry. Originally, the term "generation" was used to distinguish between varying hardware technologies. However, nowadays it has been extended to include both hardware and software, which together make up an entire computer system.
The custom of referring to the computer era in terms of generations came into wide use only after 1964. There are totally five computer generations known until today. Each generation is discussed below in detail, along with its identifying characteristics. Although there is a certain amount of overlap between generations, the approximate dates shown against each are normally accepted.

During the description of the various computer generations, you will come across several terminologies and computer jargon, which you might not be aware of and may not be able to understand properly. However, the idea here is just to give you an overview; the technologies will be described in detail in subsequent chapters. Moreover, remember that the objective of this book is also the same - to introduce you to the various concepts about computers. Hence, you will have a better understanding of the terminologies introduced in this section only after you have completed reading this entire book. The objective of this section is mainly to provide an overview of what you are going to learn in this entire book.

First Generation (1942-1955):

Figure: Generations Of Computer
We have already discussed some of the early computers - ENIAC, EDVAC, EDSAC, UNIVAC I and IBM 701. These machines, and others of their time, were built using thousands of vacuum tubes. A vacuum tube was a fragile glass device, which used filaments as a source of electrons and could control and amplify electronic signals. It was the only high-speed electronic switching device available in those days. These vacuum tube computers could perform computations in milliseconds and were referred to as first-generation computers.


The memory of these computers was constructed using electromagnetic relays, and all data and instructions were fed into the system from punched cards. The instructions were written in machine and assembly languages because high-level programming languages were introduced much later. Because machine and assembly languages are very difficult to work with, only a few specialists understood how to program these early computers.
The characteristic features of first-generation computers are as follows:
1-They were the fastest calculating devices of their time.
2-They were too bulky in size, requiring large rooms for installation.
3-The thousands of vacuum tubes that were used emitted large amounts of heat and burnt out frequently. Hence, the rooms/areas in which these computers were located had to be properly air-conditioned.
4-Each vacuum tube consumed about half a watt of power. Since a computer typically used more than ten thousand vacuum tubes, the power consumption of these computers was very high.
5-As vacuum tubes used filaments, they had a limited life. Since thousands of vacuum tubes were used in making one computer, these computers were prone to frequent hardware failures.
6-Due to the low mean time between failures, these computers required almost constant maintenance.
7-In these computers, thousands of individual components had to be assembled manually by hand into functioning circuits. Hence, commercial production of these computers was difficult and costly.
8-Since these computers were difficult to program and use, they had limited commercial use.


Second Generation (1955-1964):


A new electronic switching device called the transistor was invented at Bell Laboratories in 1947 by John Bardeen, William Shockley and Walter Brattain. Transistors proved to be a better electronic switching device than vacuum tubes due to their following properties:

1-They were more rugged and easier to handle than tubes, since they were made of germanium semiconductor material rather than glass.
2-They were highly reliable as compared to tubes, since they had no part like a filament, which could burn out.
3-They could switch much faster (almost ten times faster) than tubes. Hence, switching circuits made of transistors could operate much faster than their counterparts made of tubes.
4-They consumed almost one-tenth the power consumed by a tube.
5-They were much smaller than a tube.
6-They were less expensive to produce.
7-They dissipated much less heat as compared to vacuum tubes.
The second-generation computers were manufactured using transistors instead of vacuum tubes. Due to the properties of transistors listed above, the second-generation computers were more powerful, more reliable, less expensive, smaller and cooler to operate than the first-generation computers.

The memory of the second-generation computers was composed of magnetic cores. Magnetic disk and magnetic tape were the main secondary storage media used in second-generation computers. Punched cards were still popular and widely used for preparing programs and data to be fed to these computers.

On the software front, the second generation saw the emergence of high-level programming languages and batch operating systems. High-level programming languages like FORTRAN, COBOL, ALGOL and SNOBOL were developed during the second-generation period, and these were much easier for people to understand and work with than assembly or machine languages. Hence, the second-generation computers were easier to program and use than the first-generation computers. The introduction of batch operating systems allowed multiple jobs to be batched together and submitted at a time, with automatic transition from one job to another as soon as the former job finished. This concept helped in reducing human intervention while processing multiple jobs, resulting in faster processing, enhanced throughput and easier operation of second-generation computers.

The first-generation computers were mainly used for scientific computations. However, in the second generation, an increasing usage of computers was seen in business and industry for commercial data processing applications like payroll, inventory control, marketing and production planning.

The characteristic features of second-generation computers are as follows:
1-They were more than ten times faster than the first-generation computers.
2-They were much smaller than the first-generation computers, requiring smaller space.
3-Although the heat dissipation was much less than that of first-generation computers, the rooms/areas in which the second-generation computers were located still had to be properly air-conditioned.
4-They consumed much less power than the first-generation computers.
5-They were more reliable and less prone to hardware failures than the first-generation computers.
6-They had faster and larger primary and secondary storage as compared to first-generation computers.
7-They were much easier to program and use than the first-generation computers. Hence, they had wider commercial use.
8-In these computers, thousands of individual transistors had to be assembled manually by hand into functioning circuits. Hence, commercial production of these computers was difficult and costly.

Third Generation:

In 1958, Jack St. Clair Kilby and Robert Noyce invented the first integrated circuit. Integrated circuits (called ICs) are circuits consisting of several electronic components like transistors, resistors and capacitors grown on a single chip of silicon, eliminating wired interconnection between components. The IC technology was also known as "microelectronics" technology because it made it possible to integrate a larger number of circuit components into a very small (less than 5 mm square) surface of silicon, known as a "chip". Initially, the integrated circuits contained only about ten to twenty components. This technology was named small-scale integration (SSI). Later, with advancement in the technology for manufacturing ICs, it became possible to integrate up to about a hundred components on a single chip. This technology came to be known as medium-scale integration (MSI). The third generation was characterised by computers built using integrated circuits. The earlier ones used SSI technology and the later ones used MSI technology. ICs were much smaller, less expensive to produce, more rugged and reliable, faster in operation, dissipated less heat, and consumed much less power than circuits built by wiring electronic components manually. Hence, third-generation computers were more powerful, more reliable, less expensive, smaller and cooler to operate than the second-generation computers.

Parallel advancements in storage technologies allowed the construction of large random access memories based on magnetic cores, and large-capacity magnetic disks and magnetic tapes. Hence, the third-generation computers typically had a few megabytes (less than 5 megabytes) of main memory, and magnetic disks capable of storing a few tens of megabytes of data per disk drive.

On the software front, the third generation saw the emergence of standardisation of high-level programming languages, time-sharing operating systems, unbundling of software from hardware, and the creation of an independent software industry. FORTRAN and COBOL, which were the most popular high-level programming languages in those days, were standardised by the American National Standards Institute (ANSI) in 1966 and 1968, respectively. They were referred to as ANSI FORTRAN and ANSI COBOL. The idea was that, as long as a program was written in the standard version of the language, it could run on any computer with an ANSI FORTRAN or ANSI COBOL compiler. Additionally, some more high-level programming languages were introduced during the third-generation period. Notable among these were PL/1, PASCAL and BASIC.

We saw that in second-generation computers batch operating systems were used. In these systems, users had to prepare their data and programs and then submit them to the computer centre for processing. The operator at the computer centre collected these user jobs and fed them to the computer in batches at scheduled intervals. The output produced for each job was then sent to the computer centre counter for being returned to the respective users. This style of operation was frustrating to some users, especially programmers, because they often had to wait for days to locate and correct a few program errors. To remedy this situation, John Kemeny and Thomas Kurtz of Dartmouth College introduced the concept of the time-sharing operating system.

A time-sharing operating system simultaneously allows a large number of users to directly access and share the computing resources in such a manner that each user gets the illusion that no one else is using the computer. This is accomplished by having a large number of independent, relatively low-speed terminals simultaneously connected to the main computer. The introduction of the time-sharing concept helped in drastically improving the productivity of programmers, and made on-line systems feasible, resulting in new on-line applications like airline reservation systems, interactive query systems, etc.

Until 1965, computer manufacturers sold their hardware along with all associated software and did not charge separately for the software they provided to customers. For example, buyers received language translators for all the languages that could run on the computers they purchased. From the user's standpoint, all this software was free. However, the situation changed in 1969, when IBM and other computer manufacturers began to price their hardware and software products separately. This unbundling of software from hardware gave users an opportunity to invest only in software of their need and value. For example, buyers could now purchase only the language translators they needed and not all the language translators available on the computer they purchased. This led to the creation of many new software houses and the beginning of an independent software industry.
The development and introduction of minicomputers also took place during the third-generation period. The computers built until the early 1960s were mainframe systems, which only very large companies could afford to purchase and use.

Clearly, a need existed for low-cost smaller computers to fill the gaps left by the bigger, faster and costlier mainframe systems. Several innovators recognised this need and formed new firms in the 1960s to produce smaller computers. The first commercially available minicomputer, the PDP-8 (Programmed Data Processor), was introduced in 1965 by Digital Equipment Corporation (DEC). It could easily fit in the corner of a room and did not require the attention of a full-time computer operator. It used a time-sharing operating system and could be accessed simultaneously by a number of users from different locations in the same building. Its cost was about one-fourth of the cost of a traditional mainframe system, making it possible for smaller companies to afford computers. It confirmed the tremendous demand for small computers for business and scientific applications, and by 1971, there were more than 25 computer manufacturers in the minicomputer market.

The characteristic features of third-generation computers are as follows:
1-They were much more powerful than the second-generation computers. They were capable of performing about 1 million instructions per second.
2-They were much smaller than second-generation computers, requiring less space.
3-Although their heat dissipation was much less than that of second-generation computers, the rooms/areas in which the third-generation computers were located still had to be properly air-conditioned.
4-They consumed much less power than the second-generation computers.
5-They were more reliable and less prone to hardware failures than the second-generation computers. Hence, the maintenance cost was much lower.
6-They had faster and larger primary and secondary storage as compared to second-generation computers.
7-They were totally general-purpose machines, suitable for both scientific and commercial applications.
8-Their manufacturing did not require manual assembly of individual components into electronic circuits, resulting in reduced human labour and cost involved at the assembly stage. Hence, commercial production of these systems was easier and cheaper. However, highly sophisticated technology and an expensive setup were required for the manufacture of IC chips.
9-Standardisation of high-level programming languages allowed programs written for one computer to be easily ported to and executed on another computer.
10-Time-sharing operating systems allowed interactive usage and simultaneous use of these systems by a large number of users.
11-Time-sharing operating systems helped in drastically improving the productivity of programmers, cutting down the time and cost of program development severalfold.
12-Time-sharing operating systems also made on-line systems feasible, resulting in the usage of these systems for new on-line applications.
13-Unbundling of software from hardware gave users of these systems an opportunity to invest only in software of their need and value.
14-The minicomputers of the third generation made computers affordable even for smaller companies.

Fourth Generation (1975-1989):

The average number of electronic components packed on a silicon chip doubled each year after 1965. This progress soon led to the era of large scale integration (LSI), when it was possible to integrate over 30,000 electronic components on a single chip, followed by very large scale integration (VLSI), when it was possible to integrate about one million electronic components on a single chip. This progress led to a dramatic development: the creation of the microprocessor. A microprocessor contains all the circuits needed to perform arithmetic, logic and control functions, the core activities of all computers, on a single chip.

Hence, it became possible to build a complete computer with a microprocessor, a few additional primary storage chips and other support circuitry. This started a new social revolution: the personal computer (PC) revolution. Overnight, computers became incredibly compact. They became inexpensive to make, and suddenly it became possible for anyone to own a computer.

During the fourth generation, magnetic core memories were replaced by semiconductor memories, resulting in large random access memories with very fast access times. In addition, hard disks became cheaper, smaller and larger in capacity. In parallel, besides magnetic tapes, floppy disks became popular as a portable medium for porting programs and data from one computer system to another.

Another significant development during the fourth-generation period was the spread of high-speed computer networking, which enabled multiple computers to be connected together so that they could communicate and share data. Local area networks (LANs) became popular for connecting several dozen or even several hundred computers within an organization or a campus, and wide area networks (WANs) became popular for connecting computers located at larger distances. This gave rise to networks of computers and distributed systems.

On the software front, several new developments emerged to match the new technologies of the fourth generation. For example, several new operating systems were developed for PCs. Notable among these were MS-DOS, MS Windows and Apple's proprietary OS. Because PCs were to be used by individuals who were not computer professionals, companies developed graphical user interfaces to make computers more user friendly (easier to use). A graphical user interface (GUI) provides icons (pictures) and menus (lists of choices) with which users can interact with the computer easily. Several PC-based applications were also developed to make PCs a powerful tool. Notable among these were powerful word-processing packages, which allowed easy development of documents; spreadsheet packages, which allowed easy manipulation and analysis of data organised in columns and rows; and graphics packages, which allowed easy drawing of pictures and diagrams. Another very useful concept, which became popular during the fourth-generation period, was that of multiple windows on a single terminal screen.

This feature allowed users to simultaneously see the current status of several applications in separate windows on the same terminal screen. During the fourth-generation period, the UNIX operating system and the C programming language also became very popular.
The characteristic features of fourth-generation computers are as follows:

1-The PCs were smaller and cheaper than mainframes and minicomputers of the third generation.
2-The mainframes were much more powerful than the third-generation systems.
3-Although the mainframes required proper air-conditioning of the rooms/areas in which they were located, no air-conditioning was required for PCs.
4-They consumed much less power than third-generation computers.
5-They were more reliable and less prone to hardware failures than the third-generation computers. Hence, the maintenance cost was negligible.
6-They had faster and larger primary and secondary storage as compared to third-generation computers.
7-They were totally general purpose machines.
8-Their manufacturing did not require manual assembly of individual components into electronic circuits, resulting in reduced human labour and cost involved at the assembly stage. Hence, commercial production of these systems was easier and cheaper. However, highly sophisticated technology and an expensive setup were required for the manufacture of LSI and VLSI chips.
9-Use of standard high-level programming languages allowed programs written for one computer to be easily ported to and executed on another computer.
10-Graphical User Interface (GUI) enabled new users to quickly learn how to use computers.
11-PC-based applications made the PC a powerful tool for both office and home use.
12-Networks of computers enabled sharing of resources like disks, printers etc. among multiple computers and their users. They also enabled several new types of applications involving interaction among computer users at geographically distant locations. Computer Supported Cooperative Working (CSCW), or groupware, is one such application, in which multiple members working on a single project and located at distant locations cooperate with each other by using a network of computers.
13-In addition to unbundled software, these systems also used add-on hardware features, which allowed users to invest only in the hardware configuration and software of their need and value.
14-The PCs of the fourth generation made computers affordable even for individuals for their personal use at home.

Fifth Generation (1989-Present):

The trend of further miniaturisation of electronic components, the dramatic increase in the power of microprocessor chips and the growth in the capacity of main memory and hard disks continued in the fifth generation. VLSI technology became ULSI (Ultra Large Scale Integration) technology in the fifth generation, resulting in the production of microprocessor chips having ten million electronic components. In fact, the speed of microprocessors and the size of main memory and hard disks doubled almost every eighteen months. As a result, many of the features found in the CPUs of the large mainframe systems of the third and fourth generations became part of the microprocessor architecture in the fifth generation. This ultimately resulted in very powerful and compact computers becoming available at cheaper rates, and the death of traditional large mainframe systems.
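To put that doubling rate in perspective, here is a simple illustrative calculation (an approximation based on the eighteen-month figure above, not a measurement from any specific system) showing that doubling every eighteen months compounds to roughly a hundredfold increase over a single decade:

# Doubling every 18 months: compound growth over 10 years (illustrative).
months = 10 * 12
factor = 2 ** (months / 18)
print(f"growth over 10 years: about {factor:.0f}x")  # prints about 102x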

Due to this fast pace of advancement in computer technology, we see more compact and more powerful computers being introduced almost every year at more or less the same price, or even cheaper. Notable among these are portable computers, which give the power of a PC to their users even while travelling; powerful desktop PCs and workstations; powerful servers; and very powerful supercomputers.

Storage technology also advanced very fast, making larger and larger main memory and disk storage available in newly introduced systems. During the fifth generation, optical disks also emerged as a popular portable mass storage medium. They are more commonly known as CD-ROM (Compact Disk-Read Only Memory) because they are mainly used for storing programs and data that are only read (not written/modified).

During the fifth-generation period, there was tremendous growth of computer networks. Communication technologies became faster day by day, and more and more computers were networked together. This trend resulted in the emergence and popularity of the Internet and its associated technologies and applications. The Internet made it possible for computer users sitting across the globe to communicate with each other within minutes by the use of the electronic mail (e-mail) facility. A vast ocean of information became readily available to computer users through the World Wide Web (WWW). Moreover, several new types of exciting applications like electronic commerce, virtual libraries, virtual classrooms, distance education etc. emerged during this period.

The tremendous processing power and the massive storage capacity of fifth-generation computers also made them a very useful and popular tool for a wide range of multimedia applications, which deal with information containing text, graphics, animation, audio and video data. In general, the data size of multimedia information is much larger than that of textual information, because the representation of graphics, animation, audio or video media in digital form requires a much larger number of bits than the representation of textual information.
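A rough back-of-the-envelope comparison makes the gap concrete (the figures below are common illustrative assumptions, not taken from any particular system): a full page of text of about 2,000 characters occupies roughly 2,000 bytes, while a single second of CD-quality stereo audio, sampled 44,100 times per second at 16 bits per sample on 2 channels, occupies almost ninety times as much:

# Illustrative comparison of textual vs. audio data sizes (assumed figures).
text_page    = 2000 * 1        # ~2,000 characters at 1 byte per character
audio_second = 44100 * 2 * 2   # 44,100 samples/s x 2 bytes/sample x 2 channels
print(f"one page of text      : {text_page:>7,} bytes")
print(f"one second of CD audio: {audio_second:>7,} bytes")  # 176,400 bytes

Video is larger still, since each frame is itself an image and dozens of frames are needed per second, which is why multimedia applications demanded the processing power and storage capacity of fifth-generation systems.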

The characteristic features of fifth-generation computers are as follows:
1-Portable PCs (called notebook computers) are much smaller and handier than the PCs of the fourth generation, allowing users to use computing facilities even while travelling.
2-The desktop PCs and workstations are several times more powerful than the PCs of the fourth generation.
3-The mainframes are several times more powerful than the mainframe systems of the fourth generation.
4-Although the mainframes require proper air-conditioning of the rooms/areas in which they are located,no air-conditioning is normally required for the notebook computers,desktop PCs and workstations.
5-They consume much less power than their predecessors.
6-They are more reliable and less prone to hardware failures than their predecessors.Hence,the maintenance cost is negligible.
7-Many of the large-scale systems of the fifth generation have a hot-pluggable feature. This feature enables a failed component to be replaced with a new one without the need to shut down the system, allowing the uptime of the system to be very high.
8-They have faster and larger primary and secondary storage as compared to their predecessors.
9-They are totally general purpose machines.
10-Their manufacturing does not require manual assembly of individual components into electronic circuits, resulting in reduced human labour and cost involved at the assembly stage. Hence, commercial production of these systems is easier and cheaper. However, highly sophisticated technology and an expensive setup (affordable only by a few organisations in the world) are required for the manufacture of ULSI chips.
11-Use of standard high-level programming languages allows programs written for one computer to be easily ported to and executed on another computer.
12-More user-friendly interfaces with multimedia features make the systems easier to learn and use by anyone including small children.
13-Newer and more powerful applications, including multimedia applications, make the systems more useful in every occupation.
14-The explosion in the size of the Internet, coupled with Internet-based tools and applications, has made these systems influence the life of even common men and women.
15-These systems also use the concept of unbundled software and add-on hardware, allowing users to invest only in the hardware configuration and software of their need and value.
16-With so many types of computers in all price ranges, today we have a computer for almost any type of user, whether the user is a small child or a world-famous scientist.


We have looked at the history of computing divided into five generations, and we have seen how quickly things have changed in the last few decades. However, the rate of technological progress in this area is not slowing down at all. As we enter the 21st century, future generations of computers will evolve with higher capability and user friendliness. In fact, the fastest growth period in the history of computing may still be ahead.