Computer Organisation and Architecture



E-528-529, Sector-7,

Dwarka, New Delhi-110075

(Nr. Ramphal Chowk and Sector 9 Metro Station)

Ph. 011-47350606,

(M) 7838010301-04

www.eduproz.in

Educate Anytime...Anywhere...

"Greetings For The Day"

About EduProz

We, at EduProz, started our voyage with a dream of making higher education available for everyone. Since its inception, EduProz has been working as a stepping-stone for students coming from varied backgrounds. The best part is that classroom support for distance learning or correspondence courses in both the management (MBA and BBA) and Information Technology (MCA and BCA) streams is free of cost.

Experienced faculty members, state-of-the-art infrastructure and a congenial environment for learning are a few of the things we offer our students. Our panel of industry experts, drawn from various industrial domains, leads students not only to secure good marks in examinations, but also to gain an edge over others in their professional lives. Our study materials are sufficient to keep students abreast of the present nuances of the industry. In addition, we give importance to regular tests and sessions to evaluate our students' progress.

Students can attend regular classes of the distance learning MBA, BBA, MCA and BCA courses at EduProz without paying anything extra. Our centrally air-conditioned classrooms, well-maintained library and well-equipped laboratory facilities provide a comfortable environment for learning.

Honing specific skills is essential for success in an interview. Keeping this in mind, EduProz has a career counselling and career development cell where we help students prepare for interviews. Our dedicated placement cell has been helping students land their dream jobs on completion of the course.

EduProz is strategically located in Dwarka, West Delhi (walking distance from Dwarka Sector 9 Metro Station and a 4-minute drive from the national highway); students can easily come to our centre from anywhere in Delhi and neighbouring Gurgaon, Haryana, and avail of a quality-oriented education facility at no extra cost.

Why choose EduProz for distance learning?


• EduProz provides classroom facilities free of cost.

• Classroom teaching at EduProz is conducted by experienced faculty.

• Classrooms are spacious and fully air-conditioned, ensuring a comfortable ambience.

• The course fee is not unreasonably expensive.

• Placement assistance and student counselling facilities are provided.

• EduProz, unlike several other distance learning providers, strives to help and motivate pupils to get high grades, thus ensuring that they are well placed in life.

• Students are groomed and prepared to face interview boards.

• Mock tests, unit tests and examinations are held to evaluate progress.

• Special care is taken in the area of personality development.

"HAVE A GOOD DAY"

Karnataka State Open University

(KSOU) was established on 1st June 1996 with the assent of H.E. the Governor of Karnataka as a full-fledged University in the academic year 1996, vide Government notification No/EDI/UOV/dated 12th February 1996 (Karnataka State Open University Act, 1992). The Act was promulgated with the object of incorporating an Open University at the State level for the introduction and promotion of Open University and Distance Education systems in the education pattern of the State and the country, and for the co-ordination and determination of the standards of such systems. Keeping in view the educational needs of our country in general, and of the State in particular, the policies and programmes have been geared to cater to the needy.

Karnataka State Open University is a UGC-recognised University of the Distance Education Council (DEC), New Delhi, a regular member of the Association of Indian Universities (AIU), Delhi, a permanent member of the Association of Commonwealth Universities (ACU), London, UK, and of the Asian Association of Open Universities (AAOU), Beijing, China, and is also associated with the Commonwealth of Learning (COL).

Karnataka State Open University is situated at the north-western end of the Manasagangotri campus, Mysore. The campus, which is about 5 km from the city centre, has a serene atmosphere ideally suited to academic pursuits. The University at present houses the Administrative Office, Academic Block, Lecture Halls, a well-equipped Library, Guest House Cottages, a modest Canteen, a Girls' Hostel and a few cottages providing limited accommodation to students coming to Mysore for attending the Contact Programmes or Term-end examinations.

MC0062-1.1 Introduction to Number Systems

Introduction to Number Systems

The binary numbering system and the representation of digital codes in binary form are the fundamentals of digital electronics. In this chapter a comprehensive study of different numbering systems, namely decimal, binary, octal and hexadecimal, is carried out. The conversion and representation of a given number from any numbering system to another, and a detailed analysis of operations such as binary addition, multiplication, division and subtraction, are introduced. Binary subtraction carried out with the help of adder circuits using the complementary number system is also introduced, as are special codes such as the BCD codes.

Objectives:

By the end of this chapter, the reader is expected:

• To have a complete understanding of the meaning and usefulness of numbering systems.

• To carry out arithmetic operations such as addition, subtraction, multiplication and division on binary, octal and hexadecimal numbers.

• To convert a given number to the different formats.

• To explain the usefulness of the complementary numbering system in arithmetic and the Binary Coded Decimal (BCD) numbering system.

MC0062-1.2 The Decimal Number System

The Decimal Number System

The Decimal Number System uses base 10 and is represented by arranging the 10 symbols 0 through 9, where these symbols are known as digits. The position of each symbol in a given sequence has a certain numerical weight. It makes use of a decimal point.

The decimal number system is thus a weighted-sum representation of symbols. Table 1.1 shows the weights associated with each position in the decimal numbering system.

…. 10000 1000 100 10 1 • 0.1 0.01 0.001 ….

…. 10^4 10^3 10^2 10^1 10^0 (decimal point) 10^-1 10^-2 10^-3 ….

Table 1.1: Weights associated with the position in Decimal numbering system.

Example: 835.25 = 8 x 10^2 + 3 x 10^1 + 5 x 10^0 + 2 x 10^-1 + 5 x 10^-2

= 8 x 100 + 3 x 10 + 5 x 1 + 2 x 0.1 + 5 x 0.01

= 800 + 30 + 5 + 0.2 + 0.05

= 835.25

The leftmost digit, which has the highest weight, is called the most significant digit, and the rightmost digit, which has the least weight, is called the least significant digit. The digits to the right of the decimal point are known as the fractional part and the digits to the left of the decimal point are known as the integer part.

Any number of zeros can be inserted to the left of the most significant digit of the integer part and to the right of the least significant digit of the fractional part without altering the value of the number represented.

Self Assessment Question 1:

Represent the following decimal numbers with the associated weights

a) 1395 b) 7456 c) 487.46 d) 65.235
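As a quick, hedged illustration (our own addition, not part of the original text), the weighted-sum idea can be checked in Python:

# Weighted-sum evaluation of 835.25 = 8 x 10^2 + 3 x 10^1 + 5 x 10^0 + 2 x 10^-1 + 5 x 10^-2
digits = [(8, 2), (3, 1), (5, 0), (2, -1), (5, -2)]   # (digit, power of ten)
value = sum(d * 10 ** p for d, p in digits)
print(value)   # 835.25 (possibly with a tiny floating-point rounding error)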

MC0062-1.3 The Binary Numbering System

The Binary Numbering System

The Binary Number System uses base 2 and is represented by the symbols 0 and 1; these are known as bits, or binary digits. The position of each bit in a given sequence has a numerical weight. It makes use of a binary point.

Thus a binary number can be represented as a weighted sum of bits. Table 1.2 shows the weights associated with each position in the binary numbering system.

Equivalent weight in decimal: …. 16 8 4 2 1 • 0.5 0.25 0.125 ….

Binary powers: …. 2^4 2^3 2^2 2^1 2^0 (binary point) 2^-1 2^-2 2^-3 ….

Table 1.2: Weights associated with the position in Binary numbering system.

Example: 101.11(2) = 1 x 2^2 + 0 x 2^1 + 1 x 2^0 + 1 x 2^-1 + 1 x 2^-2

Self Assessment Question 2:

Represent the following binary numbers with their associated weights

a) 11001.111(2) b) 11.101(2) c) 11011(2) d) 0.11101(2)

Counting in Binary

Counting in binary is analogous to the counting methodology used in the decimal numbering system. The only symbols available are 0 and 1. Counting begins with 0 and then 1. Since all the symbols are then exhausted, two-bit combinations are formed by placing a 1 to the left to get 10 and 11. Similarly, continuing to place a 1 to the left, 100, 101, 110 and 111 are obtained. Table 1.3 illustrates the counting methodology in the binary system.

Decimal Count Binary Count 5 bit notation

0 0 00000

1 1 00001

2 10 00010

3 11 00011

4 100 00100

5 101 00101

6 110 00110

7 111 00111

8 1000 01000

9 1001 01001

10 1010 01010

11 1011 01011

12 1100 01100

13 1101 01101

14 1110 01110

15 1111 01111

16 10000 10000

17 10001 10001

18 10010 10010

19 10011 10011

20 10100 10100

21 10101 10101

22 10110 10110


23 10111 10111

24 11000 11000

25 11001 11001

26 11010 11010

27 11011 11011

28 11100 11100

29 11101 11101

30 11110 11110

31 11111 11111

Table 1.3: illustrations for counting in Binary

Binary to Decimal Conversion

Weighted Sum Representation: A binary number is represented with its associated weights. The rightmost bit, which has the value 2^0 = 1, is known as the Least Significant Bit (LSB). The weight associated with each bit increases from right to left by a power of two. In a fractional binary number, bits are also placed to the right of the binary point, and the equivalent decimal weights for these bit locations are shown in Table 1.2. The value of a given binary number can be determined as the weighted sum of the individual weights.

Example: 101.11(2) = 1 x 2^2 + 0 x 2^1 + 1 x 2^0 + 1 x 2^-1 + 1 x 2^-2

= 1 x 4 + 0 x 2 + 1 x 1 + 1 x 0.5 + 1 x 0.25

= 4 + 0 + 1 + 0.5 + 0.25

= 5.75(10)

1101(2) = 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0

= 1 x 8 + 1 x 4 + 0 x 2 + 1 x 1

= 8 + 4 + 0 + 1

= 13(10)

0.111(2) = 0 x 2^0 + 1 x 2^-1 + 1 x 2^-2 + 1 x 2^-3

= 0 x 1 + 1 x 0.5 + 1 x 0.25 + 1 x 0.125

= 0 + 0.5 + 0.25 + 0.125

= 0.875(10)

Self Assessment Question 3:

Convert the following binary numbers to decimal

a.) 1010(2) b.) 11.101(2) c.) 1011110001(2) d.) 1.11101(2)
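As a hedged sketch (ours, not from the original text), the weighted-sum conversion of a fractional binary string to decimal can be written in Python; the helper name bin_to_dec is our own:

def bin_to_dec(s):
    # Split at the binary point; integer bits get weights 2^0, 2^1, ... from the right,
    # fractional bits get weights 2^-1, 2^-2, ... from the left.
    int_part, _, frac_part = s.partition('.')
    value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(int_part)))
    value += sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(frac_part))
    return value

print(bin_to_dec('101.11'))   # 5.75
print(bin_to_dec('1101'))     # 13
print(bin_to_dec('0.111'))    # 0.875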

Decimal to Binary Conversion

A given number in decimal form can be represented in binary in the following ways:

• Sum of Weight Method

• Repeated Division Method

• Repeated Multiplication Method

Sum of Weight Method

Table 1.2 shows the weights associated with the individual bit positions. The weight associated with a bit increases by a power of two for each bit placed to the left. For the bit positions to the right of the binary point, the weight decreases by a power of two from left to right.

Find all binary weight values less than the given decimal number. Determine the set of binary weight values that, when added, sums up to the given decimal number.

Example: To find the binary equivalent of 43, note that the binary weights which are less than 43 are 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16 and 2^5 = 32.

43(10) = 32 + 8 + 2 + 1 = 2^5 + 2^3 + 2^1 + 2^0

i.e. the set of weights 2^5, 2^3, 2^1 and 2^0, when summed up, equals the given decimal number 43.

By placing a 1 in the weight positions 2^5, 2^3, 2^1 and 2^0, and a 0 in the other positions, the equivalent binary representation is obtained.

2^5 2^4 2^3 2^2 2^1 2^0

1 0 1 0 1 1 (2) = 43(10)

Example: To find the binary equivalent of 0.625(10) = 0.5 + 0.125 = 2^-1 + 2^-3 = 0.101(2)

Example: To find the binary equivalent of 33.3125(10)

33(10) = 32 + 1 = 2^5 + 2^0 = 1 0 0 0 0 1 (2)

0.3125(10) = 0.25 + 0.0625 = 2^-2 + 2^-4 = 0 . 0 1 0 1 (2)

33.3125(10) = 1 0 0 0 0 1 . 0 1 0 1 (2)

Self Assessment Question 4:

Represent the following decimal numbers into binary using sum of weight method.

a) 1101.11(2) b) 111.001(2) c) 10001.0101(2)

Repeated Division Method

Repeated division is the more systematic method usually used for the whole-number part in decimal to binary conversion. Since the binary number system uses base 2, the given decimal number is repeatedly divided by 2 until a 0 quotient is obtained. As the division is carried out, the remainder generated at each division (0 or 1) is written down separately. The first remainder is noted down as the LSB of the binary number and the last remainder as the MSB.

Example: Write the binary equivalent of 29 (10) and 45 (10).
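The worked division tables for 29 and 45 appear to have been figures in the original. As a sketch of the procedure (our own code, written with a base parameter so that the same routine also serves the octal and hexadecimal conversions discussed later):

def to_base(n, base=2):
    # Repeatedly divide by the base; the remainders, read in reverse order,
    # give the digits: first remainder = least significant digit.
    if n == 0:
        return '0'
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append('0123456789ABCDEF'[r])
    return ''.join(reversed(digits))

print(to_base(29))       # 11101
print(to_base(45))       # 101101
print(to_base(792, 8))   # 1430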

Repeated Multiplication

Repeated multiplication is the more systematic method usually used for the fractional part in decimal to binary conversion. Since the binary number system uses base 2, the given fraction is repeatedly multiplied by 2 until a 0 fractional part is left. As the multiplication is carried out, the integer part generated at each multiplication (0 or 1) is written down separately. The first integer part thus generated is noted down as the first fractional bit of the binary number and the subsequent integers generated are placed from left to right.

Example: Write the binary equivalent of 0.625 (10) and 0.3125 (10).

Example: Write the binary Equivalent of 17.135


Therefore 17.135 (10) = 1 0 0 0 1 . 0 0 1 0 …… (2)
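A matching sketch (ours) of the repeated-multiplication rule for the fractional part; the max_digits cut-off handles fractions such as 0.135 that never terminate:

def frac_to_base(f, base=2, max_digits=8):
    # Repeatedly multiply the fraction by the base; the integer part produced at
    # each step is the next digit after the radix point, read left to right.
    digits = []
    while f > 0 and len(digits) < max_digits:
        f *= base
        d, f = int(f), f - int(f)
        digits.append('0123456789ABCDEF'[d])
    return ''.join(digits)

print(frac_to_base(0.625))                 # 101
print(frac_to_base(0.3125))                # 0101
print(frac_to_base(0.135, max_digits=4))   # 0010  (so 17.135 -> 10001.0010...)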

Self Assessment Question 5:

Represent the following decimal numbers into binary using repetitive divisions and

multiplication method.

a) 1101.11(2) b) 111.001(2) c) 10001.0101(2)

MC0062-1.4 The Octal Numbering System

The Octal Numbering System

The Octal Number System uses base 8 and uses the symbols 0, 1, 2, 3, 4, 5, 6 and 7; these are known as octal digits. The position of each digit in a given sequence has a numerical weight. It makes use of an octal point.

Thus an octal number can be represented as a weighted sum of digits. Table 1.4 shows the weights associated with each position in the octal numbering system.

Equivalent weight in decimal: … 4096 512 64 8 1 • 0.125 0.015625 …

Octal powers: … 8^4 8^3 8^2 8^1 8^0 (octal point) 8^-1 8^-2 …

Table 1.4: Weights associated with the position in Octal numbering system.

Example: 710.16(8) = 7 x 8^2 + 1 x 8^1 + 0 x 8^0 + 1 x 8^-1 + 6 x 8^-2

Self Assessment Question 6:

Give the weighted sum representation for the following octal numbers.

a) 734.52(8) b) 1234.567(8) c) 345.1271(8)

Counting in Octal

Counting in the octal number system is analogous to the counting methodology used in the decimal and binary numbering systems. The symbols available are 0 to 7. Counting begins with 0, then 1, and continues up to 7. Since all the symbols are then exhausted, two-digit combinations are formed by placing a 1 to the left to get 10, 11, 12 … 17; similarly continuing with 2, 3 … and 7 gives 20 … 77. Placing a 1 to the left again gives 100, 101, 102 … 107, 110 … 170, 200 … 270, etc.

Octal to Decimal Conversion

Weighted Sum Representation: An octal number is represented with its associated weights as shown in Table 1.4. The rightmost digit, which has the value 8^0 = 1, is known as the Least Significant Digit. The weight associated with each octal symbol increases from right to left by a power of eight. In a fractional octal number, digits are placed to the right of the octal point. The value of a given octal number can thus be determined as the weighted sum.

Example:

234.32(8) = 2 x 8^2 + 3 x 8^1 + 4 x 8^0 + 3 x 8^-1 + 2 x 8^-2

= 2 x 64 + 3 x 8 + 4 x 1 + 3 x 0.125 + 2 x 0.015625

= 128 + 24 + 4 + 0.375 + 0.03125

= 156.40625(10)

65(8) = 6 x 8^1 + 5 x 8^0

= 6 x 8 + 5 x 1

= 48 + 5

= 53(10)

0.427(8) = 4 x 8^-1 + 2 x 8^-2 + 7 x 8^-3

= 4 x 0.125 + 2 x 0.015625 + 7 x 0.001953125

= 0.544921875(10)

Decimal to Octal Conversion

A given number in decimal form can be represented in octal in the following ways:

• Sum of Weight Method

• Repeated Division Method

• Repeated Multiplication Method

Sum of Weight Method

Table 1.4 shows the weights associated with the individual symbol positions in the octal numbering system. Find all octal weight values less than the given decimal number. Determine the combination of octal weights and digits that sums up to the given decimal number.

Example: To find the octal equivalent of 99, note that the octal weights which are less than 99 are 8^0 = 1, 8^1 = 8 and 8^2 = 64.

99(10) = 1 x 64 + 4 x 8 + 3 x 1

= 1 x 8^2 + 4 x 8^1 + 3 x 8^0

= 1 4 3 (8)

Self Assessment Question 7:

Represent the following decimal numbers into octal using sum of weight method.

a) 789.45 b) 654 c) 0.678 d) 987.654

Repeated Division Method

Repeated division is the more systematic method usually used for the whole-number part in decimal to octal conversion. Since the octal number system has base 8, the given decimal number is repeatedly divided by 8 until a 0 quotient is obtained. As the division is carried out, the remainders generated at each division are written down separately. The first remainder is noted down as the Least Significant Digit (LSD) of the octal number and the last remainder as the Most Significant Digit (MSD).

Example: Write the octal equivalent of 792(10) and 1545(10).

Repeated Multiplication

Repeated multiplication is the more systematic method usually used for the fractional part in decimal to octal conversion. The given decimal fraction is repeatedly multiplied by 8 until a 0 fractional part is left. As the multiplication is carried out, the integer parts generated at each multiplication are written down separately. The first integer part thus generated is noted down as the first fractional digit of the octal number and the subsequent integers generated are placed from left to right.

Example: Write the octal equivalent of 0.3125(10).

Example: Write the octal equivalent of 541.625(10).


Therefore 541.625 (10) = 1035.5 (8)

Self Assessment Question 8:

Represent the following decimal numbers into octal using repetitive multiplication and division

method.

a) 789.45 b) 654 c) 0.678 d) 987.654

Octal to Binary Conversion

There is a direct relation between the bases of the octal and binary number systems, i.e. 8 = 2^3, which indicates that one octal symbol can be used to replace a 3-bit binary group. There are in total 8 combinations of the 3-bit binary representation, from 000 to 111, which map to the octal symbols 0 to 7.

Octal Digit Binary Bit

0 000

1 001

2 010

3 011

4 100

5 101

6 110

7 111


Table 1.5: Octal number and equivalent 3-bit Binary representation

To convert a given octal number to binary, simply replace the octal digit by its equivalent 3-bit

binary representation as shown in Table 1.5.

Example: 4762.513(8) = 4 7 6 2 . 5 1 3

= 100 111 110 010 . 101 001 011 (2)

Therefore 4762.513(8) = 100111110010.101001011(2)

Self Assessment Question 9:

Represent the following octal numbers into binary.

a.) 735.45(8) b.) 654(8) c.) 0.674(8) d.) 123.654(8)
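A minimal Python sketch (ours) of this digit-by-digit replacement, using the 3-bit mapping of Table 1.5:

# Each octal digit maps to its 3-bit binary group (Table 1.5).
OCT_TO_BIN = {str(d): format(d, '03b') for d in range(8)}

def octal_to_binary(s):
    # Replace every octal digit with its 3-bit group; the octal point is kept as-is.
    return ''.join(OCT_TO_BIN.get(ch, ch) for ch in s)

print(octal_to_binary('4762.513'))   # 100111110010.101001011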

1.4.5 Binary to Octal Conversion

Representing a given binary number in octal is also a straightforward conversion process. The given binary number is grouped into sets of three bits, moving towards the left from the binary point for the integer part and towards the right from the binary point for the fractional part. Additional 0s, if required, can be added to the left of the leftmost bit of the integer part and to the right of the rightmost bit of the fractional part while grouping.

Example: 1011001.1011(2) = 001 011 001 . 101 100 (2)

= 1 3 1 . 5 4 (8)

Therefore 1011001.1011(2) = 131.54(8)

Note: Additional 0’s were used while grouping 3-bits.

Example: 101111.001(2) = 101 111 . 001 (2)

= 5 7 . 1 (8)

Therefore 101111.001(2) = 57.1(8)

Self Assessment Question 10:

Represent the following binary numbers into octal.

a) 1101.11(2) b) 10101.0011(2) c) 0.111(2) d) 11100(2)
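The reverse grouping in threes, with zero padding, can be sketched as follows (our own illustration; the helper name binary_to_octal is ours):

def binary_to_octal(s):
    int_part, _, frac_part = s.partition('.')
    # Pad the integer part on the left and the fractional part on the right
    # so that both split evenly into 3-bit groups.
    int_part = int_part.zfill((len(int_part) + 2) // 3 * 3)
    frac_part = frac_part.ljust((len(frac_part) + 2) // 3 * 3, '0')
    to_digit = lambda g: str(int(g, 2))
    octal = ''.join(to_digit(int_part[i:i+3]) for i in range(0, len(int_part), 3))
    if frac_part:
        octal += '.' + ''.join(to_digit(frac_part[i:i+3]) for i in range(0, len(frac_part), 3))
    return octal

print(binary_to_octal('1011001.1011'))   # 131.54
print(binary_to_octal('101111.001'))     # 57.1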

MC0062-1.5 The Hexadecimal Numbering System

The Hexadecimal Numbering System

The Hexadecimal Number System uses base 16 and uses the alphanumeric symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and A, B, C, D, E, F. It uses the ten decimal digits and six alphabetic symbols; hence the name base-16 for the hexadecimal numbering system. While the decimal, binary and octal systems discussed so far were used as weighted number representations, the hexadecimal numbering system is used mainly to replace 4-bit combinations of binary. This justifies the use of the hexadecimal numbering system in microprocessors, soft computations, assemblers and digital electronic applications.

Counting in the hexadecimal numbering system is similar to the counting methodology used in the decimal, binary and octal numbering systems, as discussed earlier in this chapter.

Hexadecimal to Binary Conversion

There is a direct relation between the bases used in the hexadecimal and binary number systems, i.e. 16 = 2^4, which indicates that one hex symbol can be used to replace a 4-bit binary group. There are in total 16 combinations of the 4-bit binary representation, from 0000 to 1111, which map to the hex symbols 0 to 9 and A to F.

Hexadecimal Digit Binary Bits

0 0000

1 0001

2 0010

3 0011

4 0100

5 0101


6 0110

7 0111

8 1000

9 1001

A 1010

B 1011

C 1100

D 1101

E 1110

F 1111

Table 1.6: Hexadecimal number and equivalent 4-bit Binary representation

To convert a given hexadecimal number to binary, simply replace the hex-digit by its equivalent

4-bit binary representation as shown in Table 1.6.

Example: 1A62.B53(16) = 1 A 6 2 . B 5 3 (16)

= 0001 1010 0110 0010 . 1011 0101 0011 (2)

Therefore 1A62.B53(16) = 0001101001100010.101101010011(2)

Example: 354.A1 (16) = 3 5 4 . A 1 (16)

0011 0101 0100 . 1010 0001 (2)

Therefore 354.A1 (16) = 001101010100.10100001 (2)

Self Assessment Question 11:

Represent the following hexadecimal numbers into binary.

a) 8AC8.A5(16) b) 947.A88(16) c) A0.67B(16) d) 69AF.EDC(16)

Binary to Hexadecimal Conversion

Representing a given binary number in hexadecimal is a straightforward conversion process. The given binary number is grouped into sets of 4 bits, starting from the binary point and moving towards the left for the integer part and towards the right for the fractional part. Additional 0s, if required, can be added to the left of the leftmost bit of the integer part and to the right of the rightmost bit of the fractional part while grouping. The equivalent hexadecimal symbol is then placed for each 4-bit binary group to obtain the conversion.

Example: 1011001.1011(2) = 0101 1001 . 1011 (2)

= 5 9 . B (16)

Therefore 1011001.1011(2) = 59.B(16)

Note: Additional 0’s were used while grouping 4-bits.

Example: 101111.001(2) = 0010 1111 . 0010 (2)

= 2 F . 2 (16)

Therefore 101111.001(2) = 2F.2(16)

Note: Hexadecimal to Octal Conversion can be done by first converting a given Hexadecimal

number to binary and then converting the resultant binary to octal system. Similarly given octal

number can be converted to hexadecimal by converting first to the binary system and then to

hexadecimal system.

Self Assessment Question 12:

Represent the following binary numbers into Hexadecimal.

a.) 1101.11(2) b.) 10101.0011(2) c.) 0.111(2) d.) 11100(2)
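For whole numbers, Python's built-in conversions give a quick check of the 4-bit grouping (a sketch of ours; fractional parts would still be handled by grouping the bits as described above):

n = int('1011001', 2)               # parse a binary string
print(format(n, 'X'))               # 59  -> hexadecimal
print(format(int('2F', 16), 'b'))   # 101111  -> back to binary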

Hexadecimal to Decimal Conversion

Hexadecimal numbers can be represented with their associated positional weights as indicated in Table 1.7. The positional weights increase by a power of 16 towards the left of the hexadecimal point and decrease by a power of 16 towards the right.

Equivalent weight in decimal: …. 4096 256 16 1 • 0.0625 0.00390625 ….

Hexadecimal powers: …. 16^3 16^2 16^1 16^0 (hexadecimal point) 16^-1 16^-2 ….

Table 1.7: Weights associated with the position in hexadecimal numbering system.

Hexadecimal to Octal Conversion

As noted above, a hexadecimal number is converted to octal by first converting it to binary and then regrouping the binary digits in threes from the binary point; the reverse conversion (octal to hexadecimal) regroups the binary form in fours.

Example: 2F.2(16) = 101111.001(2) = 101 111 . 001 (2) = 57.1(8)

Self Assessment Question 13:

Represent the following hexadecimal numbers into octal numbers.

a) 8AC8.A5(16) b) 947.A88(16) c) A0.67B(16) d) 69AF.EDC(16)


Self Assessment Question 14:

Represent the following hexadecimal numbers into octal.

a) 8AC8.A5(16) b) 947.A88(16) c) A0.67B(16) d) 69AF.EDC(16)

MC0062-1.6 Binary Arithmetic

Binary Arithmetic

Let us study how basic arithmetic is performed on binary numbers.

Binary Addition

There are four basic rules for binary addition:

0(2) + 0(2) = 0(2)

0(2) + 1(2) = 1(2)    Addition of two single bits results in a single bit

1(2) + 0(2) = 1(2)

1(2) + 1(2) = 10(2)   Addition of two 1s results in two bits

Example: Perform binary addition on the following:

011(2) + 011(2) = 110(2)    (3(10) + 3(10) = 6(10))

1101(2) + 0111(2) = 10100(2)    (13(10) + 7(10) = 20(10))

11100(2) + 10011(2) = 101111(2)    (28(10) + 19(10) = 47(10))
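A small Python sketch (ours) of the column-by-column addition with carries, reproducing the rules above:

def add_binary(a, b):
    # Add two binary strings column by column, exactly as done by hand.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry   # apply the four basic rules plus the carry
        out.append(str(s % 2))
        carry = s // 2
    if carry:
        out.append('1')
    return ''.join(reversed(out))

print(add_binary('011', '011'))       # 110
print(add_binary('1101', '0111'))     # 10100
print(add_binary('11100', '10011'))   # 101111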

Binary Subtraction

There are four basic rules for binary subtraction:

0(2) – 0(2) = 0(2)

1(2) – 1(2) = 0(2)

1(2) – 0(2) = 1(2)

0(2) – 1(2) = invalid; therefore obtain a borrow of 1 from the next MSB and perform the subtraction as

10(2) – 1(2) = 1(2)

Note: In the last rule it is not possible to subtract 1 from 0; therefore a 1 is borrowed from the immediately next MSB to give the value 10, and the subtraction of 1 from 10 is then carried out.

Example: Perform binary subtraction on the following:

011(2) – 011(2) = 000(2)    (3(10) – 3(10) = 0(10))

1101(2) – 0111(2) = 0110(2)    (13(10) – 7(10) = 6(10))

11100(2) – 10011(2) = 01001(2)    (28(10) – 19(10) = 9(10))

Binary Multiplication

There are four basic rules for binary multiplication:

0(2) x 0(2) = 0(2)

0(2) x 1(2) = 0(2)

1(2) x 0(2) = 0(2)

1(2) x 1(2) = 1(2)

Note: Binary multiplication uses the shift-and-add rule, similar to decimal multiplication: the multiplicand is first multiplied by the LSB of the multiplier to obtain the first partial product; for each subsequent multiplier bit the partial product is shifted one bit to the left and added to the result obtained so far.

Example: Perform binary multiplication on the following:

011(2) x 1(2) = 011(2)    (3(10) x 1(10) = 3(10))

1101(2) x 11(2) = 1101(2) + 11010(2) = 100111(2)    (13(10) x 3(10) = 39(10))
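The shift-and-add rule in the note above can be sketched as follows (our own code; each 1 in the multiplier contributes the multiplicand shifted left by that bit's position):

def multiply_binary(a, b):
    # For each 1 in the multiplier (b), add the multiplicand (a) shifted left
    # by that bit's position; this is the shift-and-add rule.
    result = 0
    for pos, bit in enumerate(reversed(b)):
        if bit == '1':
            result += int(a, 2) << pos
    return format(result, 'b')

print(multiply_binary('1101', '11'))   # 100111  (13 x 3 = 39)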

Binary Division

The binary division is similar to the decimal division procedure

Example: Perform the binary divisions:

1111(2) ÷ 11(2) = 101(2)    (15(10) ÷ 3(10) = 5(10))

1111.0(2) ÷ 110(2) = 10.1(2)    (15(10) ÷ 6(10) = 2.5(10))

The long-division layout follows the same step-by-step procedure as decimal long division.

Complementary numbering systems: 1’s and 2’s Complements

1’s complement of a given binary number can be obtained by replacing all 0s by 1s and 1s by 0s.

Let us describe the 1’s complement with the following examples

Examples: 1’s complement of the binary numbers

Binary Number 1’s Complement

1101110 0010001

111010 000101

110 001

11011011 00100100

Binary subtraction using 1’s complementary Method:

Binary subtraction can be carried out using the method discussed above; the complementary method can also be used. When performing the subtraction, the 1’s complement of the subtrahend is obtained first and then added to the minuend. The 1’s complement method is useful in the sense that subtraction can be carried out with the adder circuits of the ALU (Arithmetic Logic Unit) of a processor.

Two cases are discussed here, depending on whether the subtrahend is smaller or larger than the minuend.

Case i) Subtrahend is smaller than the minuend

Step 1: Determine the 1’s complement of the subtrahend.

Step 2: Add the 1’s complement to the minuend; this results in the generation of a carry, known as the end-around carry.

Step 3: Remove the end-around carry thus generated and add it to the result.

Example: Perform the subtraction using the 1’s complement method.

Usual method: 11101(2) – 10001(2) = 01100(2)

1’s complement method:

11101(2) + 01110(2) (1’s complement of 10001) = 1 01011(2)   end-around carry generated

01011(2) + 1(2) (add the end-around carry) = 01100(2)   Answer

Case ii) Subtrahend is larger than the minuend

Step 1: Determine the 1’s complement of the subtrahend.

Step 2: Add the 1’s complement to the minuend; no carry is generated.

Step 3: The answer is negatively signed and is in 1’s complement form. Therefore obtain the 1’s complement of the result and indicate it with a negative sign.

Example: Perform the subtraction using the 1’s complement method.

Usual method: 10001(2) – 11101(2) = – 01100(2)

1’s complement method:

10001(2) + 00010(2) (1’s complement of 11101) = 10011(2)   no carry generated; the answer is negative and in 1’s complement form

1’s complement of 10011(2), with a negative sign: – 01100(2)   Answer
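A hedged Python sketch (ours) of the 1's complement procedure covering both cases; the function names are our own and the bit width is taken from the minuend:

def ones_complement(s):
    return ''.join('1' if b == '0' else '0' for b in s)

def subtract_1s(minuend, subtrahend):
    width = len(minuend)
    total = int(minuend, 2) + int(ones_complement(subtrahend), 2)
    if total >> width:                       # end-around carry generated (case i)
        result = (total & ((1 << width) - 1)) + 1
        return format(result, '0{}b'.format(width))
    # no carry (case ii): answer is negative and in 1's complement form
    return '-' + ones_complement(format(total, '0{}b'.format(width)))

print(subtract_1s('11101', '10001'))   # 01100
print(subtract_1s('10001', '11101'))   # -01100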

Binary subtraction using 2’s complementary Method:

The 2’s complement of a given binary number can be obtained by first obtaining the 1’s complement and then adding 1 to it. Let us obtain the 2’s complement of the following.

Examples: 2’s complement of binary numbers

Binary Number    1’s Complement + 1    2’s Complement

1101110          0010001 + 1           0010010

111010           000101 + 1            000110

110              001 + 1               010

11011011         00100100 + 1          00100101

Binary number subtraction can be carried out using 2’s complement method also. While

performing the subtraction the 2’s complement of the subtrahend is obtained first and then added

to the minuend.

Two cases are discussed here, depending on whether the subtrahend is smaller or larger than the minuend.

Case i) Subtrahend is smaller than the minuend

Step 1: Determine the 2’s complement of the subtrahend.

Step 2: Add the 2’s complement to the minuend, generating an end-around carry.

Step 3: Remove the end-around carry from the result and drop it.

Example: Perform the subtraction using the 2’s complement method.

Usual method: 11101(2) – 10001(2) = 01100(2)

2’s complement method:

11101(2) + 01111(2) (2’s complement of 10001) = 1 01100(2)   end-around carry generated; drop the carry

01100(2)   Answer

Case ii) Subtrahend is larger than the minuend

Step 1: Determine the 2’s complement of the subtrahend.

Step 2: Add the 2’s complement to the minuend; no carry is generated.

Step 3: The answer is negatively signed and is in 2’s complement form. Therefore obtain the 2’s complement of the result and indicate it with a negative sign.

Example: Perform the subtraction using the 2’s complement method.

Usual method: 10001(2) – 11101(2) = – 01100(2)

2’s complement method:

10001(2) + 00011(2) (2’s complement of 11101) = 10100(2)   no carry generated; the answer is negative and in 2’s complement form

2’s complement of 10100(2), with a negative sign: – 01100(2)   Answer
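The corresponding 2's complement sketch, under the same assumptions (the end-around carry, when generated, is simply dropped):

def subtract_2s(minuend, subtrahend):
    width = len(minuend)
    twos = (1 << width) - int(subtrahend, 2)     # 2's complement = 1's complement + 1
    total = int(minuend, 2) + twos
    if total >> width:                           # carry generated (case i): drop it
        return format(total & ((1 << width) - 1), '0{}b'.format(width))
    # no carry (case ii): answer is negative and in 2's complement form
    return '-' + format((1 << width) - total, '0{}b'.format(width))

print(subtract_2s('11101', '10001'))   # 01100
print(subtract_2s('10001', '11101'))   # -01100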

Self Assessment Question 15:

Perform the following subtractions using 1’s complement and 2’s complement methods

a) 1101(2) – 1010(2) b) 10001(2) – 11100(2) c) 10101(2) – 10111(2)

MC0062-1.7 Binary Coded Decimal (BCD) Numbering System

Binary Coded Decimal (BCD) Numbering system

The BCD code is also known as the 8421 code. It is a 4-bit weighted code representing the decimal digits 0 to 9 with four bits of binary weights (2^3, 2^2, 2^1, 2^0). Note that with four bits the number of possible binary combinations is 2^4 = 16, of which only the first 10 combinations are used. The codes 1010, 1011, 1100, 1101, 1110 and 1111 are not used.

BCD or 8421 code Decimal Number

0000 0

0001 1

0010 2

0011 3

0100 4

0101 5

0110 6

0111 7


1000 8

1001 9

A given decimal number can be represented with equivalent BCD number by replacing the

individual decimal digit with its equivalent BCD code.

Example:

348.16 (10) = 0011 0100 1000 . 0001 0110 (BCD)

18 (10) = 0001 1000 (BCD)

9357 (10) = 1001 0011 0101 0111 (BCD)

BCD Addition

BCD codes are the 4-bit binary weighted code representation of decimal numbers, and the addition operation carried out on decimal numbers can be represented with BCD addition. Since BCD does not use all 16 combinations possible with a 4-bit representation, addition performed on BCD numbers may result in invalid code words. The rules to be followed when BCD numbers are added directly are:

1. Add the two given BCD numbers using the rules for binary addition.

2. If a resultant 4-bit sum is 9 or less, it is a valid BCD code.

3. If a 4-bit sum is greater than 9, it is an invalid BCD code.

4. If a carry is generated while adding two 4-bit groups, the result is an invalid sum.

5. For both the cases described in 3 and 4, add the BCD equivalent of 6, i.e. 0110(2), so that the sum skips all six invalid states and results in a valid BCD number.

Example: a few cases where valid BCD codes are generated during BCD addition:

0011 + 0101 = 1000    (3 + 5 = 8)

1000 0110 0111 + 0001 0011 0010 = 1001 1001 1001    (867 + 132 = 999)

0100 0101 0010 + 0100 0001 0110 = 1000 0110 1000    (452 + 416 = 868)

Example: a few cases where invalid BCD codes are generated during BCD addition:

1000 + 0111 (8 + 7) = 1111, an invalid BCD combination (> 9); add 6: 1111 + 0110 = 1 0101, the valid BCD number 15

1000 + 1001 (8 + 9) = 1 0001, invalid because a carry is generated; add 6: 1 0001 + 0110 = 1 0111, the valid BCD number 17

Note: While carrying out BCD addition as discussed in the examples above, if the answer contains more than one invalid 4-bit group (either an invalid combination or one arising from carry generation), 6 is to be added to each such group to obtain a valid BCD code.
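A minimal Python sketch (ours) of BCD addition with the add-6 correction; it assumes the BCD strings have lengths that are multiples of four and processes the groups from the right, propagating a carry:

def bcd_add(a, b):
    # a, b: BCD strings whose lengths are multiples of 4. Add 4-bit groups from
    # the right; if a group sum exceeds 9, add 6 (0110) and carry into the next group.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, groups = 0, []
    for i in range(width - 4, -1, -4):
        s = int(a[i:i+4], 2) + int(b[i:i+4], 2) + carry
        carry, s = (1, s + 6 - 16) if s > 9 else (0, s)   # add-6 correction
        groups.append(format(s, '04b'))
    if carry:
        groups.append('0001')
    return ' '.join(reversed(groups))

print(bcd_add('1000', '0111'))                   # 0001 0101  (8 + 7 = 15)
print(bcd_add('1000', '1001'))                   # 0001 0111  (8 + 9 = 17)
print(bcd_add('100001100111', '000100110010'))   # 1001 1001 1001  (867 + 132 = 999)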

Self Assessment Question 16:

Add the following BCD Numbers

a) 0100000 + 1001011 b) 01100100 + 00110011 c) 0111 + 0010 d) 1010 + 0111

Summary

• A binary number system has a base of two and consists of two digits (called bits) 1 and 0.

• A binary number is a weighted number, with the weight of each whole-number digit from least significant (2^0) to most significant being an increasing positive power of two. The weight of each fractional digit, beginning at 2^-1, is an increasing negative power of two.

• The 1’s complement of a binary number is derived by changing 1s to 0s and 0s to 1s

• The 2’s complement of a binary number is derived by adding 1 to the 1’s complement

• Binary subtraction can be accomplished by addition using the 1’s or 2’s complement

methods

• A decimal whole number can be converted to binary by using the sum-of-weights or by

repeated division by 2 method

• A decimal fraction can be converted to binary by using the sum-of-weights method or by the repeated multiplication-by-2 method

• The octal number system has a base of eight and consists of eight digits (0 to 7)

• A decimal whole number can be converted to octal by using the repeated division-by-8

method

• Octal to binary conversion is accomplished by simply replacing each octal digit with its

three-bit binary equivalent. The process is reversed for binary-to-octal conversion

• The hexadecimal number system has a base of sixteen and consists of 16 digits and

characters 0 through 9 and A to F

• One hexadecimal digit represents a four-bit binary number and its primary usefulness is

simplifying bit patterns by making them easier to read


• BCD represents each decimal digit by a four-bit binary number. Arithmetic operations

can be performed in BCD.

• The main feature of the Gray code is the single-bit change in going from one number in the sequence to the next

Terminal Questions

1. Convert the following binary numbers to decimal

1. 11.001(2)

2. 1100(2)

3. 1111(2)

4. 1011.101(2)

5. 0.1101(2)

2. Convert the following decimal numbers to binary using the sum-of-weights and repeated

division methods

1. 40.345(10)

2. 143.7(10)

3. 467(10)

3. Convert the following octal numbers to decimal

1. 73.24(8)

2. 276(8)

3. 0.625(8)

4. 57.231(8)

4. Convert the octal numbers in question 3 into binary format

5. Convert the decimal numbers in question 2 into octal format

6. Convert the binary numbers in question 1 into octal format

7. Give the equivalent BCD representation for the decimal numbers given in question 2

8. Perform the BCD addition

1. 1001(2) + 0110(2)

2. 01010001(2) + 01011000(2)

3. 0111(2) + 0101(2)

4. 0101011100001(2) + 011100001000(2)

9. Perform the 1’s and 2’s complement methods to realize the binary subtraction.

1. 10011(2) – 10101(2)

2. 10010(2) – 11001(2)

3. 1111000(2) – 1111111(2)

Unit 3 Combinational Logic

This unit mainly focuses on the realization of combinational logic using basic gates, reduced representation of combinational logic using basic gates, realization of specific truth tables, the universal properties of NOR and NAND gates, canonical logic forms, and sum-of-products (SOP) and product-of-sums (POS) representations.

MC0062(A)3.1 Introduction

Introduction

In unit 2, logic gates were studied on an individual basis and in simple combinations. When logic gates are connected together to produce a specified output for certain specified combinations of input variables, with no storage involved, the resulting network is called combinational logic. In combinational logic, the output level is at all times dependent on the combination of input levels. This chapter mainly focuses on the realization of combinational logic using basic gates, reduced representation of combinational logic using basic gates, realization of specific truth tables, the universal properties of NOR and NAND gates, canonical logic forms, and sum-of-products (SOP) and product-of-sums (POS) representations.

Objectives:

By the end of this chapter, the reader should know:

• How to simplify combinational logic expressions using Boolean rules and laws and by applying De Morgan's theorem.

• How to realize the simplified expressions using basic logic gates.

• How to represent logic expressions in the canonical forms, namely sum-of-products and product-of-sums.

• What universal gates are and their application in the realization of simplified logic functions.

• What timing diagrams are and the concept of synchronization.

• How to realize combinational circuits from a specified truth table.

MC0062(A)3.2 Realization of Switching Functions using Logic Gates

Realization of switching functions using logic gates

A given logic function can be realized with a combination of basic gates. Boolean laws and rules are used to simplify the function, and the realization of the simplified function with basic gates is shown here.

Example: Realize the given function using basic gates. Use Boolean rules

and laws to simplify the logic function and realize the minimized function using basic gates.

Solution:

Direct realization:

Simplifying using Boolean Algebra:


Example: Realize the logic expression using basic gates.

Solution: Direct realization of the expression

Example: A logic function is defined by . Give the basic gate realization.

Simplify the logic function and represent with basic gates.

Solution: Direct realization of the function

Simplifying the expression using Boolean Laws


Self Assessment Question: Use Boolean algebra to simplify the logic function and realize the

given function and minimized function using discrete gates.

Solution: i) Direct realization of the function

ii) Simplified realization of the function

MC0062(A)3.3 Canonical Logic Forms

Canonical Logic Forms

The form of a Boolean expression determines how many logic gates are used, what types of gates are needed for the realization, and how they are interconnected. The more complex an expression, the more complex the circuit realization will be. Therefore an advantage of simplifying an expression is obtaining a simpler gate network.

There are two representations in which a given Boolean expressions can be represented.

• Sum of Product form (SOP)

• Product of Sum form (POS)

Sum of Products Form

In Boolean algebra, the product of two variables is represented with the AND function and the sum of any two variables with the OR function. AND and OR functions are accordingly defined with two or more input gate circuits.

Sum of products (SOP) expression is two or more AND functions ORed together. The ANDed

terms in a SOP form are known as minterms.

Example:

Here, the function in the first example has 4 minterms and that in the second example has 3 minterms. One reason the sum-of-products form is useful is the straightforward manner in which it can be implemented with logic gates. It is to be noted that the corresponding implementation is always a 2-level gate network, i.e. the maximum number of gates through which a signal must pass in going from an input to the output is two (excluding inversions, if any).

A popular method of representation of the SOP form is with the minterms. Since the minterms are ORed, a summation notation with the prefix m is used to indicate an SOP expression. If the number of variables used is n, then the minterms are notated with a numeric representation running from 0 to 2^n – 1.

Consider the above example, where the given logic expression consists of 3 variables and can be represented in terms of the associated minterms. Each minterm can be represented with an associated 3-bit binary combination and its equivalent decimal number.

Therefore the logic function can be given as

Self Assessment Question: implement the SOP expression given by or

Product of Sum Form

Product of Sum (POS) expression is the ANDed representation of two or more OR functions.

The ORed terms in a POS form are known as maxterms.

Example:

Here, the function in the first example has 4 maxterms and that in the second example has 3 maxterms. This form is also useful for the straightforward implementation of a Boolean expression with logic gates. It is to be noted that the corresponding implementation is always a 2-level gate network, i.e. the maximum number of gates through which a signal must pass in going from an input to the output is two (excluding inversions, if any).

Similar to the SOP representation, a popular method of representation of the POS form is with the maxterms. Since the maxterms are ANDed, a product notation with the prefix M is used. If the number of variables used is n, then the maxterms are notated with a numeric representation running from 0 to 2^n – 1.

Consider the above example, where the given logic expression consists of 3 variables and can be represented in terms of the associated maxterms. Each maxterm can be represented with an associated 3-bit binary combination and its equivalent decimal number.

Therefore the logic function can be given as

Self Assessment Question: Implement the SOP expression given by

MC0062(A)3.5 Timing Diagrams and Synchronous Logic

Timing Diagrams and Synchronous Logic

In digital systems, a timing diagram shows the waveforms appearing at several different points. A timing diagram is plotted against a time axis (the horizontal axis), and all observed waveforms are plotted with their time axes aligned. It is therefore possible to determine the state of each waveform at a particular instant. The timing diagram mainly assists the study of propagation delay in the gate circuitry.

A clock waveform is a rectangular pulse having HIGH and LOW levels. The basic gates were studied in unit II with digital inputs. Consider these gates now with one input being a digital (logic) input and the other being a clock waveform; the gates are then said to be pulsed or clocked. The study of gate circuitry with respect to timing pulses is known as synchronous logic.

Gate Circuitry with timing pulses.

• NOT Gate

• AND Gate

Output of an AND gate is HIGH only when all inputs are HIGH at the same time.


• OR Gate

The output of an OR gate is HIGH any time at least one of its inputs is HIGH. The output is LOW

only when all inputs are LOW at the same time.

• NAND Gate

The output of a NAND gate is LOW only when all inputs are HIGH at the same time.


• NOR Gate

The output of a NOR gate is LOW any time at least one of its inputs is HIGH. The output is HIGH

only when all inputs are LOW at the same time.
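These pulsed-gate behaviours can be tabulated with a short Python sketch (our own illustration; the input waveforms are arbitrary sample sequences, not taken from the figures referred to above):

# Sample input waveforms as lists of logic levels (1 = HIGH, 0 = LOW).
A = [0, 1, 1, 0, 1, 0, 1, 1]
B = [1, 1, 0, 0, 1, 1, 0, 1]   # e.g. a clock or a second data input

AND  = [a & b for a, b in zip(A, B)]
OR   = [a | b for a, b in zip(A, B)]
NAND = [1 - (a & b) for a, b in zip(A, B)]
NOR  = [1 - (a | b) for a, b in zip(A, B)]

for name, wave in [('A', A), ('B', B), ('AND', AND), ('OR', OR), ('NAND', NAND), ('NOR', NOR)]:
    print('{:5s}'.format(name), wave)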

Example: Determine the output waveform for the combinational circuit shown with the

indicated input waveforms.


MC0062(A)3.6 Realization of Combinational Circuits from the Truth Table

Realization of Combinational circuits from the truth table

Logic functions can be represented with truth tables as discussed in unit II. To realize a given logic function, write down the combination of input conditions for which the output is 1, in SOP form. A truth table gives the logic entries for all possible combinations of inputs related to the output: an output of logic TRUE for a specific input combination is represented by an entry '1' and logic FALSE by an entry '0'.

Example: Design a logic circuit to implement the operations specified in the following truth

table.

Inputs Output

a b c f

0 0 0 0

0 0 1 0

0 1 0 0

0 1 1 1

1 0 0 0

1 0 1 1

1 1 0 1

1 1 1 0

Solution: From the truth table, the function can be given in terms of minterms as f = Σm(3, 5, 6) = a'bc + ab'c + abc'.
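The minterm extraction just carried out can also be sketched in Python (our own illustration using the truth table above): rows with output 1 become ANDed product terms that are ORed together.

# Truth table rows (a, b, c, f) from the example above.
rows = [(0,0,0,0), (0,0,1,0), (0,1,0,0), (0,1,1,1),
        (1,0,0,0), (1,0,1,1), (1,1,0,1), (1,1,1,0)]

def sop_from_truth_table(rows, names=('a', 'b', 'c')):
    terms = []
    for *inputs, f in rows:
        if f == 1:   # each output-1 row contributes one minterm
            terms.append(''.join(n if v else n + "'" for n, v in zip(names, inputs)))
    return ' + '.join(terms)

print(sop_from_truth_table(rows))   # a'bc + ab'c + abc'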


Summary

• There are two basic forms of Boolean expressions: the sum-of-products and the product-of-sums

• Boolean expressions can be simplified using the algebraic method and are realizable using discrete gates

• Any logic function can be represented equivalently using a truth table

• Truth table simplification can be done using a sum-of-products realization or a product-of-sums realization

• De Morgan's theorems are used to represent a function using only universal gates

Terminal Questions:

1. Realize the given function using basic gates. Use Boolean rules

and laws to simplify the logic function and realize the minimized function using basic

gates.

2. Realize the logic expression using basic gates.

3. Use Boolean algebra to simplify the logic function and realize the given function and

minimized function using discrete gates.

4. Implement the following SOP expression

1.

2.

3.

4.

5. Use Boolean algebra to simplify the logic function and realize the given function and minimized function using discrete gates.


1.

2.

6. Implement the following SOP expressions with discrete gates

1.

2.

3.

7. Give the NAND realization for the logic expressions given in question numbers 3 and 4.

8. Design a logic circuit to implement the operations specified in the following truth table.

Inputs Output

a b c f

0 0 0 1

0 0 1 0

0 1 0 0

0 1 1 1

1 0 0 1

1 0 1 1

1 1 0 1

1 1 1 0

MC0062(A)4.1 Introduction

Introduction

A given logic function can be realized with minimal gate logic. Boolean algebra and its laws are of great help in reducing a given expression to a smaller one. But since the simplification process is not a systematic method, it is not certain whether the reduced expression is the minimal expression in the real sense or not.

In this chapter different combinational logic minimization methods are discussed. The most preferred method is the use of the Karnaugh map, also known as the K-map. Here the basic structure of the K-map is dealt with for two, three and four variables. The other method discussed is the Quine-McCluskey method.

Objectives

By the end of this chapter, the reader should be able to explain:

• The concept of the Karnaugh map and the simplification of logic expressions using the Karnaugh map.

• How to group adjacent cells in two-, three- and four-variable K-maps and solve the logic functions.

• How logic expressions are simplified using the Quine-McCluskey method.

• What multiple-output functions are and how to simplify and realize them.

MC0062(A)4.2 Karnaugh Map or K – Map

Karnaugh Map or K – Map

The Karnaugh map provides a systematic procedure for the simplification of logic expressions. It produces the simplest SOP expression if properly used. The user is required to know the map structure and the associated mapping rules.

A Karnaugh map consists of an arrangement of cells. Each cell represents a particular combination of variables in product form. For n variables the total number of possible combinations is 2^n; hence the Karnaugh map consists of 2^n cells.

For example, with two input variables, there are four combinations. Therefore a four cell map

must be used.

Format of a two-variable Karnaugh – Map is shown in Figure 4.1. For the purpose of illustration

only the variable combinations are labeled inside the cells. In practice, the mapping of the

variables to a cell is such that the variable to the left of a row of cells applies to each cell in that

row. And the variable above a column of cells applies to each cell in that column.

Similarly, three-variable and four-variable Karnaugh maps are shown in Figure 4.2. A three-variable map consists of 2^3 = 8 cells and a four-variable map consists of 2^4 = 16 cells. The values of the minterms are indicated within the cells. Note that in a Karnaugh map the cells are arranged in such a way that there is only a one-bit (one-variable) change between any two adjacent cells. Karnaugh maps can also be used for five, six or more variables.
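The single-bit-change ordering of the cell labels is a reflected Gray code; a short sketch of ours generates it for any number of variables:

def gray_code(n_bits):
    # Reflected Gray code: consecutive labels differ in exactly one bit,
    # which is the property used to order Karnaugh map rows and columns.
    codes = ['0', '1']
    for _ in range(n_bits - 1):
        codes = ['0' + c for c in codes] + ['1' + c for c in reversed(codes)]
    return codes

print(gray_code(2))   # ['00', '01', '11', '10']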

MC0062(A)4.3 Plotting a Boolean expression

Plotting a Boolean expression

Given a logic function, obtain its sum-of-products (SOP) form. Place a 1 in each cell corresponding to a term present in the logic function, and a 0 in all other cells.

Example: Plot a two-variable logic function

Figure 4.3: Three-variable and four-variable Karnaugh maps.

Self Assessment Question: Plot a three variable logic function in a K – map


Self Assessment Question: Plot a Four variable logic function in a K – map

MC0062(A)4.4 Logic expression

simplification

Logic expression simplification with grouping cells

Let us discuss the simplification procedure for the Boolean expressions. The procedure being

same irrespective of the dimensionality of K – map. A four variable K – map is used for the

discussion on grouping cells for expression minimizing process.

Grouping of adjacent cells are done by drawing a loop around them with the following

guidelines or rules.

• Rule 1: Adjacent cells are cells that differ by only a single variable.

• Rule 2: The 1s in the adjacent cells must be combined in groups of 1, 2, 4, 8, 16 so on

• Rule 3: Each group of 1s should be maximized to include the largest number of adjacent

cells as possible in accordance with rule 2.

• Rule 4: Every 1 on the map must be included in at least one group. There can be

overlapping groups, if they include non common 1s.

Simplifying the expression:


• Each group of 1s creates a product term composed of all variables that appear in only one form (complemented or uncomplemented) within the group.

• Variables that appear both uncomplemented and complemented within a group are eliminated.

• The final simplified expression is formed by summing the product terms of all the groups (a small sketch of this minimization, done programmatically, follows the list).
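The same minimization can be checked programmatically. The sketch below is only an illustration and assumes the sympy library is available; SOPform returns a minimal sum-of-products cover of the given minterms (internally it uses the Quine-McCluskey algorithm discussed later in this chapter):

from sympy import symbols
from sympy.logic import SOPform

a, b, c, d = symbols('a b c d')
# minterms listed as bit patterns for (a, b, c, d)
minterms = [[0,0,0,0], [0,0,0,1], [0,1,1,0], [0,1,1,1], [1,0,0,0],
            [1,0,0,1], [1,1,0,1], [1,1,1,0], [1,1,1,1]]
print(SOPform([a, b, c, d], minterms))
# one minimal cover: (b & c) | (~b & ~c) | (a & b & d); an equivalent cover
# using a & ~c & d may be returned instead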

Example: Use the Karnaugh map to simplify the expression

Solution:

Example: Reduce the following expression using the Karnaugh map

Solution:

Example: Using the Karnaugh map, implement the simplified logic expression specified by the truth table.


Inputs Output

a b c f

0 0 0 1

0 0 1 0

0 1 0 0

0 1 1 0

1 0 0 1

1 0 1 0

1 1 0 1

1 1 1 1

Solution:

Example: A logic circuit has three input terminals and one output terminal. The output is HIGH when two or more inputs are HIGH. Write the truth table and simplify using the Karnaugh map.

Inputs Output

a b c f


0 0 0 0

0 0 1 0

0 1 0 0

0 1 1 1

1 0 0 0

1 0 1 1

1 1 0 1

1 1 1 1

Solution:

Simplified expression
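As a check on this result, the small sketch below (plain Python, added for illustration) verifies that the minimal SOP of this majority function, f = ab + bc + ca, agrees with the truth table on all eight input rows:

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            f = int(a + b + c >= 2)                     # output HIGH for two or more HIGH inputs
            g = (a and b) or (b and c) or (a and c)     # candidate minimal SOP: ab + bc + ca
            assert f == int(bool(g))
print("f = ab + bc + ca matches the truth table")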

Self Assessment Question:

1. Reduce the following expressions using the K-map and implement using universal gates:

   1.

   2.

2. Reduce using the K-map:

   1.

   2.

   3.

MC0062(A)4.5 Quine McClusky Method


Quine McClusky Method

The Quine-McCluskey method, also known as the tabular method, is a more systematic method of minimizing expressions with a larger number of variables. It therefore overcomes the disadvantage of the Karnaugh map method, which is practical only up to about six variables. The Quine-McCluskey method is well suited both to hand computation and to implementation in software.

Prime implicants

Simplification of a logic expression with the Quine-McCluskey method involves the computation of prime implicants, from which a minimal sum is selected.

The procedure for the minimization of a logic expression is as follows.

• Arrange all minterms in groups according to the number of 1s in their binary representations, starting with the group having the fewest 1s and continuing with increasing numbers of 1s.

• Compare each term of the lowest-index group with every term in the succeeding group. Whenever two terms differ in exactly one bit position, combine them, writing a dash (-) in place of the differing position (a small sketch of this combining step appears after this list).

• Place a tick mark next to every term used in a combination.

• Perform the combining operation up to the last group to complete the first iteration.

• In the next iteration, compare the newly generated terms in the same way; two terms can be combined only if their dashes are in the same positions.

• Continue the process until no further combinations are possible.

• The terms which are not ticked constitute the prime implicants.
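A minimal sketch of the combining step (Python, added for illustration; the function name combine is hypothetical): terms are strings of '0', '1' and '-', and two terms combine only when they differ in exactly one non-dash position.

def combine(t1, t2):
    """Return the combined term with a dash in the differing position, or None."""
    diff = [i for i in range(len(t1)) if t1[i] != t2[i]]
    if len(diff) == 1 and '-' not in (t1[diff[0]], t2[diff[0]]):
        i = diff[0]
        return t1[:i] + '-' + t1[i+1:]
    return None

print(combine('0000', '0001'))   # '000-'  : minterms 0 and 1
print(combine('000-', '100-'))   # '-00-'  : pairs (0,1) and (8,9)
print(combine('0000', '0011'))   # None    : the terms differ in two positions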

Prime implicant chart

The prime implicant chart is a representation of the relationship between the prime implicants and the minterms constituting the logic expression. It is used to find a minimum set of prime implicants that covers all minterms; there may be more than one such minimal set. Prime implicants that are the only cover for some minterm, and which therefore appear in every such minimal set, are known as essential prime implicants. The simplified expression for a given logic function thus consists of all essential prime implicants plus, where necessary, one or more additional prime implicants. In the prime implicant chart, the prime implicants are listed row-wise and the minterms column-wise. Put a mark against each minterm covered by each prime implicant. Minterms covered by only one prime implicant identify that implicant as essential. After finding all essential prime implicants, select the further prime implicants needed to cover the remaining minterms (a small sketch of this selection step follows).
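The selection of essential prime implicants can be sketched in the same way (Python, added for illustration; the chart data is taken from the first example worked out below):

chart = {'U': {6, 7, 14, 15}, 'V': {0, 1, 8, 9}, 'W': {13, 15}, 'X': {9, 13}}
minterms = {0, 1, 6, 7, 8, 9, 13, 14, 15}

essential = set()
for m in minterms:
    covering = [p for p, cover in chart.items() if m in cover]
    if len(covering) == 1:                  # only one prime implicant covers this minterm
        essential.add(covering[0])

print(sorted(essential))                    # ['U', 'V']
covered = set().union(*(chart[p] for p in essential))
print(minterms - covered)                   # {13} -> add W or X to complete the cover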


Example: Obtain the set of prime implicants for

Column 1 - Minterms grouped by the number of 1s (binary designation a b c d):

Group 0:  0  (0 0 0 0) ✓
Group 1:  1  (0 0 0 1) ✓    8  (1 0 0 0) ✓
Group 2:  6  (0 1 1 0) ✓    9  (1 0 0 1) ✓
Group 3:  7  (0 1 1 1) ✓   13  (1 1 0 1) ✓   14  (1 1 1 0) ✓
Group 4: 15  (1 1 1 1) ✓

Column 2 - Pairs differing in one position:

(0,1)   0 0 0 -  ✓     (0,8)   - 0 0 0  ✓
(1,9)   - 0 0 1  ✓     (8,9)   1 0 0 -  ✓
(6,7)   0 1 1 -  ✓     (6,14)  - 1 1 0  ✓     (9,13)  1 - 0 1   X
(7,15)  - 1 1 1  ✓     (13,15) 1 1 - 1   W    (14,15) 1 1 1 -  ✓

Column 3 - Groups of four:

(0,1,8,9)    - 0 0 -   V
(6,7,14,15)  - 1 1 -   U

It is found that U, V, W and X are the prime implicants. The essential prime implicants are now found from the prime implicant chart.

Prime implicant chart (prime implicants in rows against minterms 0, 1, 6, 7, 8, 9, 13, 14, 15):

U = (6,7,14,15):  covers 6, 7, 14, 15
V = (0,1,8,9):    covers 0, 1, 8, 9
W = (13,15):      covers 13, 15
X = (9,13):       covers 9, 13

In the columns corresponding to minterms 0, 1, 6, 7, 8 and 14 there is only one entry each, and the prime implicants covering them are U and V. Therefore U and V are considered essential prime implicants. Besides these minterms, U and V also cover minterms 9 and 15. Minterm 13, however, is not covered by these two essential prime implicants. Therefore, along with U and V, either W or X can be used to represent the simplified Boolean expression,

where U = bc, V = b'c', W = abd and X = ac'd.

Therefore the simplified logic expression can be given as

f = bc + b'c' + abd    or    f = bc + b'c' + ac'd

Example: Obtain the set of prime implicants for

Column 1 - Minterms grouped by the number of 1s (binary designation a b c d):

Group 1:  1  (0 0 0 1) ✓    2  (0 0 1 0) ✓    8  (1 0 0 0) ✓
Group 2:  3  (0 0 1 1) ✓    5  (0 1 0 1) ✓    6  (0 1 1 0) ✓    9  (1 0 0 1) ✓   12  (1 1 0 0) ✓
Group 3:  7  (0 1 1 1) ✓   13  (1 1 0 1) ✓
Group 4: 15  (1 1 1 1) ✓

Column 2 - Pairs differing in one position:

(1,3)   0 0 - 1  ✓     (1,5)   0 - 0 1  ✓     (1,9)   - 0 0 1  ✓     (2,3)   0 0 1 -  ✓
(2,6)   0 - 1 0  ✓     (8,9)   1 0 0 -  ✓     (8,12)  1 - 0 0  ✓
(3,7)   0 - 1 1  ✓     (5,7)   0 1 - 1  ✓     (5,13)  - 1 0 1  ✓     (6,7)   0 1 1 -  ✓
(9,13)  1 - 0 1  ✓     (12,13) 1 1 0 -  ✓
(7,15)  - 1 1 1  ✓     (13,15) 1 1 - 1  ✓

Column 3 - Groups of four:

(1,3,5,7)    0 - - 1   Y
(1,5,9,13)   - - 0 1   X
(2,3,6,7)    0 - 1 -   W
(8,9,12,13)  1 - 0 -   V
(5,7,13,15)  - 1 - 1   U

It is found that U, V, W, X and Y are the prime implicants. The essential prime implicants are now found from the prime implicant chart.

Prime implicant chart (prime implicants in rows against minterms 1, 2, 3, 5, 6, 7, 8, 9, 12, 13, 15):

U = (5,7,13,15):   covers 5, 7, 13, 15
V = (8,9,12,13):   covers 8, 9, 12, 13
W = (2,3,6,7):     covers 2, 3, 6, 7
X = (1,5,9,13):    covers 1, 5, 9, 13
Y = (1,3,5,7):     covers 1, 3, 5, 7

In the columns corresponding to minterms 2, 6, 8, 12 and 15 there is only one entry each, and the prime implicants covering them are U, V and W. Therefore U, V and W are considered essential prime implicants. Besides these minterms, the essential prime implicants also cover minterms 3, 5, 7, 9 and 13. Minterm 1, however, is not covered by the essential prime implicants. Therefore, along with U, V and W, either X or Y can be used to cover all minterms and represent the simplified Boolean expression,


where U = bd, V = ac', W = a'c, X = c'd and Y = a'd.

Therefore the simplified logic expression can be given as

f = bd + ac' + a'c + c'd    or    f = bd + ac' + a'c + a'd

Self Assessment Question: Obtain the set of prime implicants for the following expression

MC0062(A)4.6 Multiple Output functions

Multiple Output functions

So far, single-output expressions have been realized using Boolean rules and simplified with Karnaugh map methods. In practical cases, problems often involve the design of more than one output from a given set of inputs.

• Each logic expression is simplified individually; separate K-maps or the Quine-McCluskey method are used for the simplification.


• The expressions use the same inputs and may share common minterms in addition to the minterms specific to each output.

Example: Simplify the given logic expressions with the given three inputs.

Solution:

Or


Case i.) Output is with

Case ii.) Output is with

Case ii) has a common term; therefore its realization requires fewer gates.

Case i.) Output is with

Case ii) Output is with

Example: Minimize and implement the following multiple output functions


and

Solution: Note that the realization of this multiple-output function involves an SOP realization for function f1 and a POS realization for function f2. K-map realization of the POS function can be done by using the alternative SOP representation.


Self Assessment Question: Minimize the following multiple output function using K – map

and

Summary

• Boolean expressions can be simplified using the algebraic method or the Karnaugh map method.

• There are two basic forms of Boolean expression: the sum-of-products (SOP) form and the product-of-sums (POS) form.

• SOP expressions can be solved by entering 1s into the K-map cells for the corresponding minterms.

• Grouping of the 1-entries follows defined rules; groups of two, four, eight or sixteen cells can be formed.

• The simplified logic expression is written in SOP form and is realized with simple gate circuitry.

• The POS form can also be solved with a K-map.

• Quine-McCluskey is another method of simplifying logic expressions, suited to a larger number of variables.

• Essential prime implicants are found using the prime implicant chart.

• Combinational logic expressions with multiple output functions are realized using basic gates.

Terminal Questions:


1. Reduce the following expression using K – Map and implement using universal gate

a.

b.

2. Reduce using the K-map

a.

b.

c.

3. Obtain the set of prime implicants for the following expression

a.

b.

4. Minimize the following multiple output function using K – map

a.

b.

5. Minimize the following multiple output function using K – map

a. and

b.


Unit 6 Latches and Flip Flops

This unit gives a clear and complete coverage of latches and flip-flops. Edge-triggered and master-slave flip-flops are discussed, with more emphasis given to the D and J-K flip-flops.

MC0062(A)6.1 Introduction

Introduction

Switching circuits are usually of either the combinational or the sequential type. The circuits studied so far are combinational circuits, whose output levels at any instant of time depend only on the present inputs, because these circuits have no memory or storage. In sequential circuits, on the other hand, the output depends not only on the present inputs but also on previous inputs/outputs. Thus the requirement of memory elements in the logic must be studied.

A simple memory unit is the flip-flop. A flip-flop can be thought of as an assembly of logic gates connected in such a way that it permits information to be stored. Flip-flops are memory elements that store 1 bit of information over a period of time. Flip-flops form the fundamental components of shift registers and counters.

Objectives:

By the end of this chapter, the reader should be able to explain:

• the concept of a basic latch

• the active-LOW and active-HIGH conventions used in latches

• what gated latches are

• what flip-flops are and the concept of edge triggering

• the use of asynchronous inputs such as PRESET and CLEAR

• the concept of the master-slave J-K flip-flop

MC0062(A)6.2 Latches The S-R Latch

Latches: The S-R Latch

The latch is a bi-stable device. The term bi-stable means that the output of the device can reside in either of two states, which is achieved using a feedback mechanism. Latches are similar to flip-flops in that flip-flops are also bi-stable devices; the difference between the two is in the method used for changing their output state.

The basic latch is known as the SET-RESET (S-R) latch. It has two inputs, labeled S and R, and two outputs, Q (which indicates a HIGH or 1) and Q' (which indicates a LOW or 0). The two outputs Q and Q'


are complementary to each other. Figure 6.1 shows the logic symbol and Table 6.1 gives the truth table of the S-R latch.

Figure 6.1: S-R Latch Logic Symbol

S  R | Q  Q'  | Comments
0  0 | (hold) | No change
0  1 | 0  1   | RESET
1  0 | 1  0   | SET
1  1 | ?  ?   | Invalid

Table 6.1: Truth table of S-R Latch

From Table 6.1, when the SET input S is HIGH (1), the output Q SETs, i.e. becomes 1 (and Q' in turn becomes 0); when the RESET input R is HIGH, Q RESETs, i.e. becomes 0 (and Q' in turn becomes 1). Hence the name S-R latch. If both inputs S and R are LOW (0), the output retains its previous state; there is no change in the output state compared with the previous state. If both inputs are HIGH (1), the output of the latch is unpredictable; this indicates the invalid state.

Active HIGH S-R Latch (NOR gate S-R Latch)

A NOR gate active-HIGH S-R latch can be constructed as shown in Figure 6.2; it consists of two cross-connected (coupled) NOR gates.

Figure 6.2: Active-HIGH S-R latch


Case i.) Assume the latch output is initially SET, Q = 1 and Q' = 0, and the inputs are S = 0 and R = 0. The inputs of G1 are 0 and 0, therefore its output retains Q = 1. The inputs of G2 are 0 and 1, therefore its output Q' is 0. Similarly, if Q = 0 and Q' = 1 initially and the inputs are S = 0 and R = 0, the inputs of G2 are 0 and 0, so its output retains Q' = 1, and the inputs of G1 are 0 and 1, so its output Q is 0.

Therefore, when S = 0 and R = 0, the output of the latch retains its previous state; there is no change in the output.

Case ii.) Assume the latch output is initially SET, Q = 1 and Q' = 0, and the inputs S = 1 and R = 0 are applied to the latch. The inputs of G2 are 1 and 1, therefore its output Q' is 0. The inputs of G1 are 0 and 0, therefore its output Q is 1. Similarly, if Q = 0 and Q' = 1 initially and the inputs are S = 1 and R = 0, the inputs of G2 are 1 and 0, so its output Q' = 0, and the inputs of G1 are 0 and 0, so its output Q is 1.

Therefore, when S = 1 and R = 0, the output of the latch SETs.

Case iii.) Assume the latch output is initially SET, Q = 1 and Q' = 0, and the inputs S = 0 and R = 1 are applied to the latch. The inputs of G1 are 1 and 0, therefore its output Q is 0. The inputs of G2 are 0 and 0, therefore its output Q' is 1. Similarly, if Q = 0 and Q' = 1 initially and the inputs are S = 0 and R = 1, the inputs of G1 are 1 and 1, so its output Q is 0, and the inputs of G2 are 0 and 0, so its output Q' is 1.

Therefore, when S = 0 and R = 1, the output of the latch RESETs.

Case iv.) If the inputs are S = 1 and R = 1, the corresponding outputs are Q = 0 and Q' = 0, which is an invalid combination.

The operation of the active-HIGH NOR latch can be summarized as follows:

1. SET = 0 and RESET = 0: has no effect on the output; it retains its previous state.
2. SET = 1 and RESET = 0: always sets the output, Q = 1 and Q' = 0.
3. SET = 0 and RESET = 1: always resets the output, Q = 0 and Q' = 1.
4. SET = 1 and RESET = 1: this condition tries to set and reset the output of the latch at the same time, so the output is unpredictable. This state is referred to as the invalid state.
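A minimal behavioural sketch (Python, added for illustration; it iterates the two NOR equations until they settle, and is not a gate-delay model) reproduces the four cases above:

def nor(x, y):
    return int(not (x or y))

def sr_nor_latch(S, R, Q, Qn):
    """Settle the cross-coupled NOR latch: Q = NOR(R, Q'), Q' = NOR(S, Q)."""
    for _ in range(4):                # a few passes are enough to settle
        Q, Qn = nor(R, Qn), nor(S, Q)
    return Q, Qn

print(sr_nor_latch(0, 0, 1, 0))   # (1, 0)  S = R = 0: no change
print(sr_nor_latch(1, 0, 0, 1))   # (1, 0)  SET
print(sr_nor_latch(0, 1, 1, 0))   # (0, 1)  RESET
print(sr_nor_latch(1, 1, 1, 0))   # (0, 0)  invalid: both outputs forced low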

Active Low S-R Latch ( NAND Gate S-R Latch)

A NAND gate active-LOW S-R latch can be constructed as shown in Figure 6.3; it consists of two cross-connected (coupled) NAND gates.

Figure 6.3: Active-LOW S-R latch

S  R | Q  Q'  | Comments
0  0 | ?  ?   | Invalid
0  1 | 1  0   | SET
1  0 | 0  1   | RESET
1  1 | (hold) | No change

Table 6.2: Truth table of S-R Latch

The operation of the active-LOW NAND latch can be summarized as follows:

1. SET = 0 and RESET = 0: this condition tries to set and reset the output of the latch at the same time, so the output is unpredictable. This state is referred to as the invalid state.
2. SET = 0 and RESET = 1: always sets the output, Q = 1 and Q' = 0.
3. SET = 1 and RESET = 0: always resets the output, Q = 0 and Q' = 1.
4. SET = 1 and RESET = 1: has no effect on the output; it retains its previous state.

An active-HIGH NAND latch can also be implemented; its circuit diagram is shown in Figure 6.4 and its truth table in Table 6.3.


Figure 6.4: active HIGH S-R Latch

S  R | Q  Q'  | Comments
0  0 | (hold) | No change
0  1 | 0  1   | RESET
1  0 | 1  0   | SET
1  1 | ?  ?   | Invalid

Table 6.3: Truth table of S-R Latch

Self Assessment Question:

1. What do you mean by latch?

Explain the working of NAND gate based latch operation.

MC0062(A)6.3 Gated Latches

Gated Latches

The latches described in section 6.2 are known as asynchronous latches. The term asynchronous means that the output changes state at any time in response to the conditions on the input terminals. To gain control over the latch output, gated latches are used. A control or enable pin EN is provided which controls the output of the latch. Latches whose output is controlled by an enable input are known as gated latches, synchronous latches or flip-flops.

Gated S-R Latches


When the EN pin is HIGH, the inputs S and R control the output of the flip-flop. When the EN pin is LOW, the inputs become ineffective and there is no state change in the output. Since a HIGH voltage level on the EN pin enables or controls the output of the latch, gated latches of this type are also known as level-triggered latches or flip-flops.

The logic symbol and the truth table of the gated latch are shown in figure 6.5 and table 6.4 and

the logic diagram of gated S-R flip-flop is shown in figure 6.6.

Figure 6.5: S-R Latch Logic Symbol

Fig. 6.6: Gated Latch or Flip-flop

S  R  EN   | Q  Q'  | Comments
0  0  HIGH | (hold) | No change
0  1  HIGH | 0  1   | RESET
1  0  HIGH | 1  0   | SET
1  1  HIGH | ?  ?   | Invalid

Table 6.4: Truth table of gated S-R Latch


Figure 6.7: Waveform of gated S-R latch

Gated D-Latch or D-flip-flop

The S-R flip-flop makes use of four input combinations, and in many applications S = R = 0 and S = R = 1 are never used; S and R are then always complements of each other, so the input R can be obtained by inverting the input S.

The D flip-flop therefore has only one data input pin D, along with the enable pin EN that controls the latch. With the enable pin EN HIGH and D = 1, we have S = 1 and R = 0, which SETs the output. With EN HIGH and D = 0, we have S = 0 and R = 1, which RESETs the output.
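A small sketch of this behaviour (Python, added for illustration): when EN is HIGH the output follows D, and when EN is LOW the stored value is held.

def gated_d_latch(D, EN, Q):
    """Return the new output of a level-sensitive (gated) D latch."""
    return D if EN else Q

Q = 0
for D, EN in [(1, 1), (0, 0), (1, 0), (0, 1)]:
    Q = gated_d_latch(D, EN, Q)
    print("D={} EN={} -> Q={}".format(D, EN, Q))
# D=1 EN=1 -> Q=1,  D=0 EN=0 -> Q=1,  D=1 EN=0 -> Q=1,  D=0 EN=1 -> Q=0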

The logic symbol and the truth table of the gated D latch are shown in Figure 6.8 and Table 6.5, and the logic diagram of the gated D latch is shown in Figure 6.9.

Figure 6.8: Gated D Latch Logic Symbol


Fig 6.9: Gated D-Latch or D-flip-flop

D  EN   | Q | Comments
0  HIGH | 0 | RESET
1  HIGH | 1 | SET

Table 6.5: Truth table of gated D-Latch or D-flip-flop

Figure 6.10: Waveform of Gated D-Latch

Self Assessment Question:

1. What do you mean by gated latch?

Explain the working of gated D-latch.

MC0062(A)6.4 Edge triggered Flip-Flops

Edge triggered Flip-Flops

Digital logic systems operate either synchronously or asynchronously. In asynchronous systems, the output changes whenever one or more inputs change. In synchronous systems, the output changes with respect to a control or enable signal, usually known as the clock signal.

A flip-flop circuit which uses the clock signal is known as a clocked flip-flop. Many system output changes occur when the clock makes a transition. A clock transition from 0 to 1 is called a positive transition, and a transition from 1 to 0 a negative transition. Systems whose outputs change during either of these transitions are known as edge-triggered systems. Edge triggering is also known as dynamic triggering.


Flip-flops whose output changes during the positive transition of the clock are known as positive edge-triggered flip-flops, and flip-flops which change their output during the negative transition of the clock are known as negative edge-triggered flip-flops.

Positive edge triggering is indicated by a triangle at the clock terminal, and negative edge triggering by a triangle with a bubble at the clock terminal. There are three basic types of edge-triggered flip-flops: the S-R flip-flop, the J-K flip-flop and the D flip-flop.

Edge triggered S-R Flip-Flop (S-R FF)

Figures 6.11 and 6.12 show the positive edge-triggered and negative edge-triggered S-R flip-flops. Figure 6.13 gives the simplified circuitry of the edge-triggered S-R FF. The S and R inputs are known as synchronous control inputs; without a clock pulse these inputs cannot change the output state. Tables 6.6 and 6.7 give the truth tables of the S-R FF for positive and negative edge triggering.

Fig. 6.13: edge triggered S-R flip-flop

S  R  Clk | Q  Q'  | Comments
0  0  ↑   | (hold) | No change
0  1  ↑   | 0  1   | RESET
1  0  ↑   | 1  0   | SET
1  1  ↑   | ?  ?   | Invalid

Table 6.6: Truth table for positive edge triggered S-R flip-flop

S  R  Clk | Q  Q'  | Comments
0  0  ↓   | (hold) | No change
0  1  ↓   | 0  1   | RESET
1  0  ↓   | 1  0   | SET
1  1  ↓   | ?  ?   | Invalid

Table 6.7: Truth table for negative edge triggered S-R flip-flop

Figure 6.14: waveforms for positive edge triggered S-R flip-flop

Edge triggered D-Flip-Flop (D-FF)

Figure 6.11 and figure 6.12 indicates the positive edge triggered and negative edge triggered D

flip-flops. Figure 6.15 gives the simplified circuitry of edge triggered D FF. There is only one

input D. Without a clock pulse the input cannot change the output state. The table 6.8 and table

6.9 give the truth table for D-FF for positive and negative edge triggering.


Figure 6.15: edge triggered D-FF

D  Clk | Q | Comments
0  ↑   | 0 | RESET
1  ↑   | 1 | SET

Table 6.8: Truth table for positive edge triggered D- flip-flop

D  Clk | Q | Comments
0  ↓   | 0 | RESET
1  ↓   | 1 | SET

Table 6.9: Truth table for negative edge triggered D- flip-flop

Figure 6.16: waveforms for negative edge triggered D- flip-flop

Edge triggered J-K Flip-Flop (J-K FF)


Figures 6.11 and 6.12 show the positive edge-triggered and negative edge-triggered J-K flip-flops. Figure 6.17 gives the simplified circuitry of the edge-triggered J-K FF. These are similar to S-R FFs except that the J-K FF has no invalid state; J-K FFs are therefore versatile and the most widely used FFs. Without a clock pulse the inputs J and K cannot change the output state. Tables 6.10 and 6.11 give the truth tables of the J-K FF for positive and negative edge triggering.

Figure 6.17: edge triggered J-K FF

1. When J = 0 and K = 0: no change of state when a clock pulse is applied

2. When J = 0 and K = 1: output resets on positive/negative going edge of the clock pulse applied.

3. When J = 1 and K = 0: output sets on positive/negative going edge of the clock pulse applied.

4. When J = 1 and K = 1: output toggles between two states 0 and 1 for every positive/negative

going edge of the clock pulse applied.
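The four cases can be summarised in a small sketch (Python, added for illustration; the function is evaluated only at the triggering clock edge, and the edge detection itself is not modelled here):

def jk_ff(J, K, Q):
    """Next state of a J-K flip-flop at the active clock edge."""
    if J == 0 and K == 0:
        return Q              # no change
    if J == 0 and K == 1:
        return 0              # RESET
    if J == 1 and K == 0:
        return 1              # SET
    return 1 - Q              # J = K = 1: toggle

Q = 0
for J, K in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
    Q = jk_ff(J, K, Q)
    print(Q)                  # 1, 1, 0, 1, 0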

J  K  Clk | Q  Q'    | Comments
0  0  ↑   | (hold)   | No change
0  1  ↑   | 0  1     | RESET
1  0  ↑   | 1  0     | SET
1  1  ↑   | (toggle) | Toggle

Table 6.10: Truth table for positive edge triggered J-K flip-flop


Table 6.11: Truth table for negative edge triggered J-K flip-flop

Figure 6.23: waveforms for negative edge triggered J-K flip-flop

Self Assessment Question:

1. What do you mean by level triggered FF and an edge triggered FF.

Explain the working of positive and negative edge triggered J-K flip-flop.

MC0062(A)6.5 Asynchronous inputs: PRESET and CLEAR

Asynchronous inputs: PRESET and CLEAR


Edge-triggered or synchronous FFs were studied in sections 6.3 and 6.4, where the S-R, D and J-K inputs are called synchronous inputs: the effect of these signals on the output is synchronized with the clock or control pulse.

Flip-flop ICs also contain one or more asynchronous inputs which work independently of the synchronous and clock inputs. These asynchronous inputs are used to force the output of a given flip-flop to a PRESET (set to 1) or CLEAR (reset to 0) state. Usually an active-LOW PRESET (PRE) pin and an active-LOW CLEAR (CLR) pin are used. Figure 6.19 gives the logic symbol of a J-K FF with active-LOW PRESET and CLEAR pins, and Table 6.12 gives its function.

Figure 6.19: negative edge triggered J-K FF with active low PRESET and CLEAR

Figure 6.20: Logic diagram of edge triggered J-K FF with active low PRESET and CLEAR

1. PRE = 0 and CLR = 0: not used.
2. PRE = 0 and CLR = 1: PRESETs (SETs) the output to 1.
3. PRE = 1 and CLR = 0: CLEARs (RESETs) the output to 0.
4. PRE = 1 and CLR = 1: allows normal clocked (edge-triggered) operation of the FF.

MC0062(A)6.6 Master-Slave J-K Flip Flop

Master-Slave J-K Flip Flop

Master-slave FFs were developed to make synchronous operation more predictable. A known time delay is introduced between the instant the FF responds to a clock pulse and the instant the response appears at its output. It is also known as a pulse-triggered flip-flop, because the time required for its output to change state equals the width of one clock pulse.

A master-slave FF actually consists of two FFs, one known as the master and the other as the slave. Control inputs are applied to the master FF prior to the clock pulse. On the rising edge of the clock pulse, the output of the master is defined by the control inputs. On the falling edge of the clock pulse, the state of the master is transferred to the slave, and the outputs of the slave are taken as Q and Q'. Note the requirement in the master-slave arrangement that the inputs must be held stable while the clock is HIGH. Figure 6.21 shows the logic diagram of the J-K master-slave FF; its truth table is shown in Table 6.12.

Figure 6.21: Logic diagram of JK Master Slave FFs

Table 6.12: Truth table for negative edge triggered J-K flip-flop


Figure 6.22: waveforms for master slave J-K flip-flop

T – Flip-Flop

A J-K flip-flop with both J = 1 and K = 1 toggles its output between the two possible states 0 and 1. The idea of the toggle (T) flip-flop is therefore to hold both J and K HIGH at all times and apply the clock pulse. Figure 6.23 shows the logic block diagram of the T flip-flop, and Table 6.13 shows its truth table.

Table 6.13: Truth table of T flip-flop
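A small sketch of the toggle behaviour (Python, added for illustration): with T held HIGH the output inverts on every active clock edge, so Q runs at half the clock frequency.

def t_ff(T, Q):
    """Next state of a T flip-flop at the active clock edge."""
    return Q ^ T              # toggle when T = 1, hold when T = 0

Q = 0
for edge in range(6):
    Q = t_ff(1, Q)            # T tied HIGH
    print(edge + 1, Q)        # Q alternates 1, 0, 1, 0, ... (divide-by-2)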

Self Assessment Question

• Explain the working of the toggle flip-flop

• Give the timing diagram of the toggle flip-flop

Summary

• Latches are bistable elements whose state normally depends on asynchronous inputs.


• Edge triggered flip-flops are bistable elements with synchronous inputs whose state

depends on the inputs only at the triggering transition of a clock pulse.

• Changes in the output of the edge triggered flip-flops occur at the triggering transition of

the clock.

• Pulse triggered or master-slave flip-flops are bistable elements with synchronous inputs

whose state depends on the inputs at the leading edge of the clock pulse, but whose

output is postponed and does not reflect the internal state until the trailing clock edge.

• The synchronous inputs should not be allowed to change while the clock is HIGH.

Terminal Question

1. What is latch and Flip flop

2. Explain the difference between synchronous and asynchronous latches

3. Describe the main features of gated S-R latch and edge triggered S-R flip-flop operations.

4. Explain the concept of toggling in J-K FF.

5. Describe the operation of master – slave concept of JK flip flop.


Unit 9 Shift Registers

Shift registers, like counters, are a form of sequential logic. This unit describes different shift register applications such as serial-to-parallel conversion, parallel-to-serial conversion, the ring counter and the Johnson counter.

MC0062(A)9.1 Introduction


Introduction

Shift registers, like counters, are a form of sequential logic. Sequential logic, unlike combinational logic, is affected not only by the present inputs but also by the prior history. In other words, sequential logic remembers past events.

Shift registers produce a discrete delay of a digital signal or waveform. A waveform synchronized to a clock, such as a repeating square wave, is delayed by "n" discrete clock times, where "n" is the number of shift register stages. Thus, a four-stage shift register delays "data in" by four clocks to "data out". The stages in a shift register are delay stages, typically type "D" flip-flops or type "JK" flip-flops.

Formerly, very long (several hundred stages) shift registers served as digital memory. This

obsolete application is reminiscent of the acoustic mercury delay lines used as early computer

memory. Serial data transmission, over a distance of meters to kilometers, uses shift registers to

convert parallel data to serial form. Serial data communications replaces many slow parallel data

wires with a single serial high speed circuit. Serial data over shorter distances of tens of

centimeters, uses shift registers to get data into and out of microprocessors. Numerous

peripherals, including analog to digital converters, digital to analog converters, display drivers,

and memory, use shift registers to reduce the amount of wiring in circuit boards.

Some specialized counter circuits actually use shift registers to generate repeating waveforms.

Longer shift registers, with the help of feedback generate patterns so long that they look like

random noise, pseudo-noise.

Objective:

In this chapter you will learn

• How a register stores data

• The basic form of data movement in shift registers

• How data movement is controlled

• The operation of serial in-serial out, serial in-parallel out, parallel in-serial out and parallel in-parallel out shift registers

• Working of a bidirectional shift register

• How to construct a ring counter

• Working of IC 74LS395

MC0062(A)9.2 Shift Register Classification

Shift Register Classification

Basic shift registers are classified by structure according to the following types:

• Serial-in/serial-out


• Serial-in/parallel-out

• Parallel-in/serial-out

• Universal parallel-in/parallel-out

• Ring counter

Figure 9.1: 4-stage Serial-in, serial-out shift register

In figure 9.1 we show a block diagram of a serial-in/serial-out shift register, which is four stages long. Data at the input will be delayed by four clock periods from the input to the output of the shift register.

Data at “data in”, above, will be present at the Stage A output after the first clock pulse. After the

second pulse stage A data is transferred to stage B output and “data in” is transferred to stage A

output. After the third clock, stage C is replaced by stage B; stage B is replaced by stage A; and

stage A is replaced by “data in”. After the fourth clock, the data originally present at “data in” is

at stage D, “output”. The “first in” data is “first out” as it is shifted from “data in” to “data out”.
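A minimal sketch of this behaviour (Python, added for illustration; the stage list and the shift helper are hypothetical):

def shift(stages, data_in):
    """One clock pulse: data_in enters stage A, everything moves one stage right."""
    return [data_in] + stages[:-1]

stages = [0, 0, 0, 0]                               # stages A, B, C, D
for t, bit in enumerate([1, 0, 1, 1, 0, 0, 0, 0], start=1):
    stages = shift(stages, bit)
    print("after clock {}: stages={}  data out={}".format(t, stages, stages[-1]))
# the first input bit (1) reaches "data out" (stage D) after the fourth clock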

Figure 9.2: 4-stage Parallel-in, serial-out shift register

Data is loaded into all stages of a parallel-in/serial-out shift register at once. The data is then shifted out via "data out" by clock pulses. Since a 4-stage shift register is shown in figure 9.2, four

clock pulses are required to shift out all of the data. The stage D data will be present at the

“data out” up until the first clock pulse; stage C data will be present at “data out” between the

first clock and the second clock pulse; stage B data will be present between the second clock

and the third clock; and stage A data will be present between the third and the fourth clock.

After the fourth clock pulse and thereafter, successive bits of “data in” should appear at “data

out” of the shift register after a delay of four clock pulses.


Figure 9.3: 4-stage serial-in, parallel-out shift register

In figure 9.3, four data bits will be shifted in from “data in” by four clock pulses and be available

at QA through QD for driving external circuitry such as LEDs, lamps, relay drivers, and horns.

After the first clock, the data at "data in" appears at QA. After the second clock, the old QA data appears at QB and QA receives the next data bit from "data in". After the third clock, QB data is at QC. After the fourth clock, QC data is at QD; this is the data bit first presented at "data in". The shift register should now contain four data bits.

Figure 9.4: 4-stage parallel-in, parallel-out shift register

A parallel-in/parallel-out shift register combines the function of the parallel-in, serial-out shift register with the function of the serial-in, parallel-out shift register to yield the universal shift register.

Data presented at DA through DD is parallel loaded into the registers. This data at QA through QD

may be shifted by the number of pulses presented at the clock input. The shifted data is available

at QA through QD. The “mode” input, which may be more than one input, controls parallel

loading of data from DA through DD, shifting of data, and the direction of shifting. There are shift

registers which will shift data either left or right.


Figure 9.5: Ring Counter

If the serial output of a shift register is connected to the serial input, data can be perpetually

shifted around the ring as long as clock pulses are present. If the output is inverted before being

fed back as shown above, we do not have to worry about loading the initial data into the “ring

counter”.

Self assessment questions

1. Explain how a flip-flop can store a data bit

2. How many clock pulses are required to shift a byte of data into and out of an eight-bit serial in-serial out shift register?

3. How many clock pulses are required to shift a byte of data into and out of an eight-bit serial in-parallel out shift register?

MC0062(A)9.4 Serial In, Parallel out

Serial In, Parallel out

A serial-in/parallel-out shift register is similar to the serial-in/ serial-out shift register in that it

shifts data into internal storage elements and shifts data out at the serial-out, data-out, pin. It is

different in that it makes all the internal stages available as outputs. Therefore, a serial-

in/parallel-out shift register converts data from serial format to parallel format. If four data bits

are shifted in by four clock pulses via a single wire at data-in, below, the data becomes available

simultaneously on the four Outputs QA to QD after the fourth clock pulse.

Figure 9.11: 4-stage serial-in, parallel-out shift register

The practical application of the serial-in/parallel-out shift register is to convert data from serial

format on a single wire to parallel format on multiple wires. Perhaps, we will illuminate four

LEDs (Light Emitting Diodes) with the four outputs (QA QB QC QD).


Figure 9.12: 4-stage serial-in, parallel-out shift register using D Flip Flops

The above details of the serial-in/parallel-out shift register are fairly simple. It looks like a serial-

in/ serial-out shift register with taps added to each stage output. Serial data shifts in at SI (Serial

Input). After a number of clocks equal to the number of stages, the first data bit in appears at SO

(QD) in the above figure. In general, there is no SO pin. The last stage (QD above) serves as SO

and is cascaded to the next package if it exists.

If a serial-in/parallel-out shift register is so similar to a serial-in/ serial-out shift register, why do

manufacturers bother to offer both types? Why not just offer the serial-in/parallel-out shift

register? They actually only offer the serial-in/parallel-out shift register, as long as it has no more

than 8-bits. Note that serial-in/ serial-out shift registers come in bigger than 8-bit lengths of 18 to

64-bits. It is not practical to offer a 64-bit serial-in/parallel-out shift register requiring that many

output pins. See waveforms below for above shift register.

Figure 9.13: Timing diagram of 4-stage serial-in, parallel-out shift register


The shift register has been cleared prior to any data entry by the active-LOW clear signal, which clears all type D flip-flops within the shift register. Note the serial data 1011 pattern presented at the SI

input. This data is synchronized with the clock CLK. This would be the case if it is being shifted

in from something like another shift register, for example, a parallel-in/ serial-out shift register

(not shown here). On the first clock at t1, the data 1 at SI is shifted from D to Q of the first shift

register stage. After t2 this first data bit is at QB. After t3 it is at QC. After t4 it is at QD. Four

clock pulses have shifted the first data bit all the way to the last stage QD. The second data bit a 0

is at QC after the 4th clock. The third data bit a 1 is at QB. The fourth data bit another 1 is at QA.

Thus, the serial data input pattern 1011 is contained in (QD QC QB QA). It is now available on the

four outputs.

It will be available on the four outputs from just after clock t4 to just before t5. This parallel data must be used or stored between these two times, or it will be lost due to shifting out of the QD stage on the following clocks t5 to t8, as shown above in figure 9.13.

MC0062(A)9.5 Parallel In, Parallel out Shift Register

Parallel In, Parallel out Shift Register

The purpose of the parallel-in/ parallel-out shift register is to take in parallel data, shift it, then

output it as shown below. A universal shift register is a do-everything device in addition to the

parallel-in/ parallel-out function.

Figure 9.14: 4-stage parallel-in, parallel-out shift register

Above, we apply four bits of data to a parallel-in/ parallel-out shift register at DA DB DC DD. The

mode control, which may be multiple inputs, controls parallel loading vs shifting. The mode

control may also control the direction of shifting in some real devices. The data will be shifted

one bit position for each clock pulse. The shifted data is available at the outputs QA QB QC QD.

The “data in” and “data out” are provided for cascading of multiple stages. Though, above, we


can only cascade data for right shifting. We could accommodate cascading of left-shift data by

adding a pair of left pointing signals, “data in” and “data out”, above.

Self assessment questions

1. Give the timing diagram of a four bit serial in-serial out shift register with data input is

being 1011.

2. The binary number 10110101 is serially shifted into an eight-bit parallel-out shift register that has an initial content of 11100100. What are the Q outputs after two clock pulses? After four clock pulses? After eight clock pulses?

MC0062(A)9.6 74LS395 – A Universal Shift Register

74LS395 – A Universal Shift Register

The internal details of a right shifting parallel-in/ parallel-out shift register are shown below.

The tri-state buffers are not strictly necessary to the parallel-in/ parallel-out shift register, but

are part of the real-world device shown below.

Figure 9.15: Universal shift register 74LS395 internal structure

The 74LS395 so closely matches our concept of a hypothetical right-shifting parallel-in/ parallel-out shift register that we use an overly simplified version of the data sheet details above. The load/shift control input controls the AND-OR multiplexer at the data inputs to the FFs. When this control is 1, the upper four AND gates are enabled, allowing application of the parallel inputs DA DB DC DD to the four FF data inputs. Note the inverter bubble at the clock input of the four FFs. This indicates that the 74LS395 clocks data on the negative-going clock, which is the high-to-low transition. The four bits of data will be clocked in parallel from DA DB DC DD to QA QB QC QD at the next negative-going clock. In this "real part", the output-control input must be low if the data needs to be available at the actual output pins as opposed to only on the internal FFs.


The previously loaded data may be shifted right by one bit position, when the load/shift control is 0, on each succeeding negative-going clock edge. Four clocks would shift the data entirely out of our 4-bit shift register. The data would be lost unless our device is cascaded from its last-stage output to the SER input of another device.

Above, a data pattern is presented to inputs DA DB DC DD. The pattern is loaded to QA QB QC QD and then shifted one bit to the right. The incoming data is indicated by X, meaning that we do not know what it is. If the input (SER) were grounded, for example, we would know that 0s were being shifted in. Also shown is right shifting by two positions, requiring two clocks.

Serial in, Serial Out Right Shift Operation of 74LS395

Figure 9.16: Right shift operation of 74LS395

The above figure serves as a reference for the hardware involved in right shifting of data. It is too

simple to even bother with this figure, except for comparison to more complex figures to follow.

Serial in, Serial Out Left Shift Operation of 74LS395


Figure 9.16: Left shift operation of 74LS395

If we need to shift left, the FFs need to be rewired. Compare to the previous right shifter. Also,

SI and SO have been reversed. SI shifts to QC. QC shifts to QB. QB shifts to QA. QA leaves on the

SO connection, where it could cascade to another shifter SI. This left shift sequence is

backwards from the right shift sequence.

MC0062(A)9.7 Ring counters

Ring counters

If the output of a shift register is fed back to the input, a ring counter results. The data pattern

contained within the shift register will recirculate as long as clock pulses are applied. For

example, the data pattern will repeat every four clock pulses in the figure below. However, we

must load a data pattern. All 0’s or all 1’s doesn’t count.

Figure 9.17: Ring Counter


We make provisions for loading data into the parallel-in/ serial-out shift register configured as a

ring counter below. Any random pattern may be loaded. The most generally useful pattern is a

single 1.

Figure 9.17: Parallel – in, serial – out shift register configured as Ring Counter

Loading binary 1000 into the ring counter, above, prior to shifting yields a viewable pattern. The

data pattern for a single stage repeats every four clock pulses in our 4-stage example. The

waveforms for all four stages look the same, except for the one clock time delay from one stage

to the next. See figure 9.18.

Figure 9.18: Timing diagram of Ring counter with 1000 loaded

The circuit above is a divide-by-4 counter. Comparing the clock input to any one of the outputs shows a frequency ratio of 4:1. How many stages would we need for a divide-by-10 ring counter? Ten stages would recirculate the 1 every 10 clock pulses.
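A small sketch of this ring counter (Python, added for illustration), loaded with 1000 and clocked eight times:

stages = [1, 0, 0, 0]                        # parallel-loaded pattern
for clock in range(8):
    stages = [stages[-1]] + stages[:-1]      # serial output fed back to the serial input
    print(clock + 1, stages)
# 0100 -> 0010 -> 0001 -> 1000 -> ... : the single 1 recirculates with period 4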

Self assessment questions

1. Explain the working of a ring counter.

2. Write the sequence of states for an eight-bit ring counter with 1s in the first and fourth stages and 0s in the rest.

MC0062(B)5.1 Introduction


Introduction

In this unit we focus on the most complex aspects of the ALU and the control unit, which are the main components of the processing unit. We have seen in the preceding units the tasks that the CPU must perform to execute an instruction: it fetches the instruction, interprets it, fetches data, processes the data, and finally writes the result into an appropriate location. An internal CPU bus is needed to transfer data between the various registers and the ALU, because the ALU in fact operates only on data held in the internal memory of the CPU, that is, the registers.

Objectives

By the end of Unit 5, the learners should be able to:

1. Discuss the different number representations.

2. Compute the addition of signed and unsigned integers.

3. Compute the addition of floating point numbers

4. Compute the multiplication and division of signed and unsigned integers.

MC0062(B)5.2 Arithmetic Logic

Arithmetic Logic Unit

The ALU is the part of the CPU that actually performs arithmetic and logical operations on data.

All of the other elements of the computer system – control unit, registers, memory, I/O – are

there mainly to bring data into ALU for it to process and then take the results back out.

The inputs and outputs of the ALU are as shown in figure 5.1. The inputs to the ALU are the control signals generated by the control unit of the CPU and the CPU registers in which the operands for the data manipulation are stored. The outputs are a register called the status word or flag register, which reflects the result of the operation, and the CPU registers in which the result can be stored. Thus data are presented to the ALU in registers, and the results of an operation are also stored in registers. These registers are connected by signal paths to the ALU. The ALU does not directly interact with memory or other parts of the system (e.g. I/O modules); it interacts directly only with registers.

An ALU, like all other electronic components of a computer, is based on the use of simple digital devices that store binary digits and perform Boolean logic operations.

The control unit is responsible for moving data to memory or I/O modules. It is also the control unit that signals all the operations that happen in the CPU. The operations, functions and implementation of the control unit will be discussed in the eighth unit.


In this unit we will concentrate on the ALU. An important part of the use of logic circuits is for

computing various mathematical operations such as addition, multiplication, trigonometric

operations, etc. Hence we will discuss the arithmetic performed using the ALU.

Before discussing computer arithmetic, we must first have a way of representing numbers as binary data.

Self test Questions 5.2

1. The _______ is the part of the CPU that actually performs arithmetic and logical operations on data.

2. The _________ has to be one of the outputs of the ALU.

3. The ALU does not interact directly with memory. State true or false.

4. The ALU only interacts directly with _______ .

Answers to self test 5.2

1. ALU

2. flag register

3. true

4. registers

MC0062(B)5.3 Number Representations

Number Representations

Computers are built using logic circuits that operate on information represented by two valued

electrical signals. We label the two values as 0 and 1; and we define the amount of information

represented by such a signal as a bit of information, where bit stands for binary digit. The most

natural way to represent a number in a computer system is by a string of bits, called a binary

number. A text character can also be represented by a string of bits called a character code. We

will first describe binary number representations and arithmetic operations on these numbers,

and then describe character representations.

Non-negative Integers

The easiest numbers to represent are the non-negative integers. To see how this can be done, recall how we represent numbers in the decimal system. A number such as 2034 is interpreted as:

2*10^3 + 0*10^2 + 3*10^1 + 4*10^0

But there is nothing special about the base 10, so we can just as well use base 2. In base 2, each digit value is either 0 or 1, which we can represent for instance by false and true, respectively.


In fact, we have already hinted at this possibility, since we usually write 0, and 1 instead of false

and true.

All the normal algorithms for decimal arithmetic have versions for binary arithmetic, except that

they are usually simpler.
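A brief illustration (Python, added for illustration) of the positional base-2 notation and of binary addition:

bits = [1, 1, 0, 1]                                   # the binary number 1101
value = sum(b * 2**i for i, b in enumerate(reversed(bits)))
print(value)                                          # 13 = 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0
print(bin(0b1011 + 0b0101))                           # 0b10000: column-by-column addition with carries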

Negative Integers

Things are easy as long as we stick to non-negative integers. They become more complicated

when we want to represent negative integers as well.

In binary arithmetic, we simply reserve one bit to determine the sign. In the circuitry for

addition, we would have one circuit for adding two numbers, and another for subtracting two

numbers. The combination of signs of the two inputs would determine which circuit to use on the

absolute values, as well as the sign of the output.

While this method works, it turns out that there is one that is much easier to deal with by

electronic circuits. This method is called the ‘two’s complement’ method. It turns out that with

this method, we do not need a special circuit for subtracting two numbers.

In order to explain this method, we first show how it would work in decimal arithmetic with

infinite precision. Then we show how it works with binary arithmetic, and finally how it works

with finite precision.

Infinite-Precision Ten’s Complement

Imagine the odometer of an automobile. It has a certain number of wheels, each with the ten

digits on it. When one wheel goes from 9 to 0, the wheel immediately to the left of it, advances

by one position. If that wheel already showed 9, it too goes to 0 and advances the wheel to its

left, etc. Suppose we run the car backwards. Then the reverse happens, i.e. when a wheel goes

from 0 to 9, the wheel to its left decreases by one.

Now suppose we have an odometer with an infinite number of wheels. We are going to use this

infinite odometer to represent all the integers.

When all the wheels are 0, we interpret the value as the integer 0.

A positive integer n is represented by an odometer position obtained by advancing the rightmost

wheel n positions from 0. Notice that for each such positive number, there will be an infinite

number of wheels with the value 0 to the left.

A negative integer n is represented by an odometer position obtained by decreasing the rightmost wheel n positions from 0. Notice that for each such negative number, there will be an infinite number of wheels with the value 9 to the left.


In fact, we don’t need an infinite number of wheels. For each number only a finite number of

wheels is needed. We simply assume that the leftmost wheel (which will be either 0 or 9) is

duplicated an infinite number of times to the left.

While for each number we only need a finite number of wheels, the number of wheels is

unbounded, i.e., we cannot use a particular finite number of wheels to represent all the numbers.

The difference is subtle but important (but perhaps not that important for this particular course).

If we need an infinite number of wheels, then there is no hope of ever using this representation in

a program, since that would require an infinite-size memory. If we only need an unbounded

number of wheels, we may run out of memory, but we can represent a lot of numbers (each of

finite size) in a useful way. Since any program that runs in finite time only uses a finite number

of numbers, with a large enough memory, we might be able to run our program.

Now suppose we have an addition circuit that can handle nonzero integers with an infinite

number of digits. In other words, when given a number starting with an infinite number of 9s, it

will interpret this as an infinitely large positive number, whereas our interpretation of it will be a

negative number. Let us say, we give this circuit the two numbers …9998 (which we interpret as

-2) and …0005 (which we interpret as +5). It will add the two numbers. First it adds 8 and 5

which gives 3 and a carry of 1. Next, it adds 9 and the carry 1, giving 0 and a carry of 1. For all

remaining (infinitely many) positions, the value will be 0 with a carry of 1, so the final result is

…0003. This result is the correct one, even with our interpretation of negative numbers. You

may argue that the carry must end up somewhere, and it does, but in infinity. In some ways, we

are doing arithmetic modulo infinity.

Some implementations of some programming languages with arbitrary precision integer

arithmetic (Lisp for instance) use exactly this representation of negative integers.

Finite-Precision Ten’s Complement

What we have said in the previous section works almost as well with a fixed bounded number of

odometer wheels. The only problem is that we have to deal with overflow and underflow.

Suppose we have only a fixed number of wheels, say 3. In this case, we shall use the convention

that if the leftmost wheel shows a digit between 0 and 4 inclusive, then we have a positive

number, equal to its representation. When instead the leftmost wheel shows a digit between 5

and 9 inclusive, we have a negative number, whose absolute value can be computed with the

method that we have in the previous section.

We now assume that we have a circuit that can add positive three-digit numbers, and we shall see

how we can use it to add negative numbers in our representation.

Suppose again we want to add -2 and +5. The representations for these numbers with three

wheels are 998 and 005 respectively. Our addition circuit will attempt to add the two positive

numbers 998 and 005, which gives 1003. But since the addition circuit only has three digits, it

will truncate the result to 003, which is the right answer for our interpretation.


A valid question at this point is in which situation our finite addition circuit will not work. The

answer is somewhat complicated. It is clear that it always gives the correct result when a positive

and a negative number are added. It is incorrect in two situations. The first situation is when two

positive numbers are added, and the result comes out looking like a negative number, i.e., with a

first digit somewhere between 5 and 9. You should convince yourself that no addition of two

positive numbers can yield an overflow and still look like a positive number. The second

situation is when two negative numbers are added and the result comes out looking like a non-

negative number, i.e., with a first digit somewhere between 0 and 4. Again, you should convince

yourself that no addition of two negative numbers can yield an underflow and still look like a

negative number.

We now have a circuit for addition of integers (positive or negative) in our representation. We

simply use a circuit for addition of only positive numbers, plus some circuits that check:

• If both numbers are positive and the result is negative, then report overflow.

• If both numbers are negative and the result is positive, then report underflow.

Finite-Precision Two’s Complement

So far, we have studied the representation of negative numbers using ten’s complement. In a

computer, we prefer using base two rather than base ten. Luckily, the exact method described in

the previous section works just as well for base two. For an n-bit adder (n is usually 32 or 64),

we can represent positive numbers with a leftmost digit of 0, which gives values between 0 and 2^(n-1) – 1, and negative numbers with a leftmost digit of 1, which gives values between -2^(n-1) and -1.

The exact same rule for overflow and underflow detection works. If, when adding two positive

numbers, we get a result that looks negative (i.e. with its leftmost bit 1), then we have an

overflow. Similarly, if, when adding two negative numbers, we get a result that looks positive

(i.e. with its leftmost bit 0), then we have an underflow.
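As a quick illustration (a sketch added here, not from the original text; the helper name is hypothetical), the value of an n-bit pattern under this interpretation can be computed directly:

def twos_complement_value(bits, n):
    # a pattern whose leftmost bit is 1 stands for bits - 2**n
    return bits - (1 << n) if bits & (1 << (n - 1)) else bits

print(twos_complement_value(0b0111, 4))   # +7
print(twos_complement_value(0b1011, 4))   # -5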

Rational Numbers

Integers are useful, but sometimes we need to compute with numbers that are not integers.

An obvious idea is to use rational numbers. Many algorithms, such as the simplex algorithm for

linear optimization, use only rational arithmetic whenever the input is rational.

There is no particular difficulty in representing rational numbers in a computer. It suffices to

have a pair of integers, one for the numerator and one for the denominator.

To implement arithmetic on rational numbers, we can use some additional restrictions on our

representation. We may, for instance, decide that:

• positive rational numbers are always represented as two positive integers (the other

possibility is as two negative numbers),


• negative rational numbers are always represented with a negative numerator and a

positive denominator (the other possibility is with a positive numerator and a negative

denominator),

• the numerator and the denominator are always relatively prime (they have no common

factors).

Such a set of rules makes sure that our representation is canonical, i.e., that the representation for

a value is unique, even though, a priori, many representations would work.

Circuits for implementing rational arithmetic would have to take such rules into account. In

particular, the last rule would imply dividing the two integers resulting from every arithmetic

operation with their largest common factor to obtain the canonical representation.
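A minimal Python sketch of such a canonicalization step (illustrative only; it uses the standard library gcd and the rules listed above) might look as follows:

from math import gcd

def canonical_rational(num, den):
    if den == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    if den < 0:                      # keep the denominator positive
        num, den = -num, -den
    g = gcd(abs(num), den)           # divide out the largest common factor
    return num // g, den // g

print(canonical_rational(4, -6))     # (-2, 3)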

Rational numbers and rational arithmetic are not very common in the hardware of a computer. The

reason is probably that rational numbers don’t behave very well with respect to the size of the

representation. For rational numbers to be truly useful, their components, i.e., the numerator and

the denominator, both need to be arbitrary-precision integers. As we have mentioned before,

arbitrary precision anything does not go very well with fixed-size circuits inside the CPU of a

computer.

Programming languages, on the other hand, sometimes use arbitrary-precision rational numbers.

This is the case, in particular, with the language Lisp.

Self test Question 5.3

1. In binary arithmetic, we simply reserve ___ bit to determine the sign.

2. The infinite precision ten's complement representations of -2 and +5 are _________.

3. The finite precision three-digit ten's complement representations of -2 and +5 are ______.

4. Using finite precision ten's complement, if both numbers being added are positive and the result is negative, then the circuit reports _________

5. __________ is not very common in the hardware of a computer

Answers to self test 5.3

1. one

2. …998 & …005

3. 998 & 005

4. overflow

5. Rational numbers and rational arithmetic

MC0062(B)5.4 Binary Arithmetic

Binary Arithmetic


Inside a computer system, all operations are carried out on fixed-length binary values that

represent application-oriented values. The schemes used to encode the application information

have an impact on the algorithms for carrying out the operations. The unsigned (binary number

system) and signed (2’s complement) representations have the advantage that addition and

subtraction operations have simple implementations, and that the same algorithm can be used

for both representations. This note discusses arithmetic operations on fixed-length binary

strings, and some issues in using these operations to manipulate information.

It might be reasonable to hope that the operations performed by a computer always result in

correct answers. It is true that the answers are always “correct”, but we must always be careful

about what is meant by correct. Computers manipulate fixed-length binary values to produce

fixed-length binary values. The computed values are correct according to the algorithms that are

used; however, it is not always the case that the computed value is correct when the values are

interpreted as representing application information. Programmers must appreciate the difference

between application information and fixed-length binary values in order to appreciate when a

computed value correctly represents application information!

A limitation in the use of fixed-length binary values to represent application information is that

only a finite set of application values can be represented by the binary values. What happens if

applying an operation on values contained in the finite set results in an answer that is outside the

set? For example, suppose that 4-bit values are used to encode counting numbers, thereby

restricting the set of represented numbers to 0 .. 15. The values 4 and 14 are inside the set of

represented values. Performing the operation 4 + 14 should result in 18; however, 18 is outside

the set of represented numbers. This situation is called overflow, and programs must always be

written to deal with potential overflow situations.

Overflow in Integer Arithmetic

In the 2’s-complement number representation system, n bits can represent values in the range

-2^(n-1) to +2^(n-1) – 1. For example, using four bits, the range of numbers that can be represented is -8

through +7. When the result of an arithmetic operation is outside the representable range, an

arithmetic overflow has occurred.

When adding unsigned numbers, the carry-out, cn, from the most significant bit position serves

as the overflow indicator. However, this does not work for adding signed numbers. For example,

when using 4-bit signed numbers, if we try to add the numbers +7 and +4, the output sum vector,

S, is 1011, which is the code for -5, an incorrect result. The carry-out signal from the MSB position is 0. Similarly, if we try to add -4 and -6, we get S = 0110 = +6, another incorrect result,

and in this case, the carry-out signal is 1. Thus, overflow may occur if both summands have the

same sign. Clearly, the addition of numbers with different signs cannot cause overflow. This

leads to the following conclusions:

1. Overflow can occur only when adding two numbers that have the same sign.

2. The carry-out signal from the sign-bit position is not a sufficient indicator of overflow

when adding signed numbers.


A simple way to detect overflow is to examine the signs of the two summands X and Y and the

sign of the result. When both operands X and Y have the same sign, an overflow occurs when the

sign of S is not the same as the signs of X and Y.
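A sketch of this test in Python (added here for illustration; the helper name is hypothetical) for 4-bit patterns:

def add_signed(x, y, n=4):
    mask = (1 << n) - 1
    s = (x + y) & mask                      # fixed-width sum, carry-out discarded
    sign = lambda v: (v >> (n - 1)) & 1     # sign bit of an n-bit pattern
    overflow = sign(x) == sign(y) and sign(s) != sign(x)
    return s, overflow

print(add_signed(0b0111, 0b0100))   # (+7) + (+4): sum pattern 1011, overflow True
print(add_signed(0b1100, 0b1010))   # (-4) + (-6): sum pattern 0110, overflow True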

Binary Addition

The binary addition of two bits (a and b) is defined by the following table: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 0 with a carry of 1.

When adding n-bit values, the values are added in corresponding bit-wise pairs, with each carry

being added to the next most significant pair of bits. The same algorithm can be used when

adding pairs of unsigned or pairs of signed values.

4-Bit Example A: 1011 + 0010 = 1101

Since computers are constrained to deal with fixed-width binary values, any carry out of the most

significant bit-wise pair is ignored.

4-Bit Example B: 1011 + 0110 = 0001 (the carry out of 1 is discarded)

The binary values generated by the addition algorithm are always correct with respect to the

algorithm, but what is the significance when the binary values are intended to represent

application information? Will the operation yield a result that accurately represents the result of

adding the application values?


First consider the case where the binary values are intended to represent unsigned integers (i.e.

counting numbers). Adding the binary values representing two unsigned integers will give the

correct result (i.e. will yield the binary value representing the sum of the unsigned integer

values) providing the operation does not overflow – i.e. when the addition operation is applied

to the original unsigned integer values, the result is an unsigned integer value that is inside of

the set of unsigned integer values that can be represented using the specified number of bits

(i.e. the result can be represented under the fixed-width constraints imposed by the

representation).

Reconsider 4-Bit Example A (above) as adding unsigned values:

In this case, the binary result (1101₂) of the operation accurately represents the unsigned integer

sum (13) of the two unsigned integer values being added (11 + 2), and therefore, the operation

did not overflow. But what about 4-Bit Example B (above)?

When the values added in Example B are considered as unsigned values, then the 4-bit result (1)

does not accurately represent the sum of the unsigned values (11 + 6)! In this case, the operation

has resulted in overflow: the result (17) is outside the set of values that can be represented using

4-bit binary number system values (i.e. 17 is not in the set {0, …, 15}). The result (0001₂) is

correct according to the rules for performing binary addition using fixed-width values, but

truncating the carry out of the most significant bit resulted in the loss of information that was

important to the encoding being used. If the carry had been kept, then the 5-bit result (10001₂)

would have represented the unsigned integer sum correctly.
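In code, the difference between the truncated and the full result is just the carry bit; a small illustrative sketch (not part of the original text):

def add_unsigned(x, y, n=4):
    full = x + y
    return full & ((1 << n) - 1), full >> n   # (truncated sum, carry out)

print(add_unsigned(11, 2))   # (13, 0): 1011 + 0010 = 1101, no overflow
print(add_unsigned(11, 6))   # (1, 1):  1011 + 0110 = 1 0001, carry lost -> overflow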

But more can be learned about overflow from the above examples! Now consider the case where

the binary values are intended to represent signed integers.

Reconsider 4-Bit Example A (above) as adding signed values:


In this case, the binary result (1101₂) of the operation accurately represents the signed integer sum (– 3) of the two signed integer values being added (– 5 + 2); therefore, the operation did

not overflow. What about 4-Bit Example B?

In this case, the result (again) represents the signed integer answer correctly, and therefore,

the operation did not overflow.

Recall that in the unsigned case, Example B resulted in overflow. In the signed case, Example B

did not overflow. This illustrates an important concept: overflow is interpretation dependent!

The concept of overflow depends on how information is represented as binary values. Different

types of information are encoded differently, yet the computer performs a specific algorithm,

regardless of the possible interpretations of the binary values involved. It should not be

surprising that applying the same algorithm to different interpretations may have different

overflow results.

Subtraction

The binary subtraction of two bits (a and b) is defined by the following table: 0 – 0 = 0, 1 – 0 = 1, 1 – 1 = 0, and 0 – 1 = 1 with a borrow of 1.

When subtracting n-bit values, the values are subtracted in corresponding bit-wise pairs, with

each borrow rippling down from the more significant bits as needed. If none of the more

significant bits contains a 1 to be borrowed, then 1 may be borrowed into the most significant bit.

4-Bit Example C:


4-Bit Example D:

Most computers apply the mathematical identity:

a – b = a + ( – b )

to perform subtraction by negating the second value (b) and then adding. This can result in a

savings in transistors since there is no need to implement a subtraction circuit.
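A small sketch of this identity in Python (illustrative only, not from the original text): the two's complement of b is formed by inverting the bits and adding 1, and the ordinary fixed-width adder then produces a – b.

def sub_via_add(a, b, n=4):
    mask = (1 << n) - 1
    neg_b = (~b + 1) & mask          # two's complement negation of b
    return (a + neg_b) & mask        # any carry out of the top bit is discarded

print(bin(sub_via_add(0b1010, 0b0001)))   # 0b1001: 9 unsigned, or -7 signed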

Another Note on Overflow

Are there easy ways to decide whether an addition or subtraction results in overflow? Yes, but we should be careful that we understand the concept, and not rely on memorizing case rules that allow the occurrence of overflow to be identified!

For unsigned values, a carry out of (or a borrow into) the most significant bit indicates that

overflow has occurred.

For signed values, overflow has occurred when the sign of the result is impossible for the signs

of the values being combined by the operation. For example, overflow has occurred if:

• two positive values are added and the sign of the result is negative

• a negative value is subtracted from a positive value and the result is negative (a positive

minus a negative is the same as a positive plus a positive, and should result in a positive

value, i.e. a – ( – b) = a + b )


These are just two examples of some of the possible cases for signed overflow.

Note that for some combinations of signed values, overflow is not possible. For example, adding a positive and a negative value will never overflow. To convince yourself of why this is the case, picture the two values on a number line. Suppose that a is a negative value, and b is a

positive value. Adding the two values, a + b will result in c such that c will always lie between a

and b on the number line. If a and b can be represented prior to the addition, then c can also be

represented, and overflow will never occur.

Multiplication

Multiplication is a slightly more complex operation than addition or subtraction. Multiplying

two n-bit values together can result in a value of up to 2n-bits. To help to convince yourself of

this, think about decimal numbers: multiplying two 1-digit numbers together results in a 1- or 2-

digit result, but cannot result in a 3-digit result (the largest product possible is 9 x 9 = 81). What

about multiplying two 2-digit numbers? Does this extrapolate to n-digit numbers? To further

complicate matters, there is a reasonably simple algorithm for multiplying binary values that

represent unsigned integers, but the same algorithm cannot be applied directly to values that

represent signed values (this is different from addition and subtraction where the same

algorithms can be applied to values that represent unsigned or signed values!).

Overflow is not an issue in n-bit unsigned multiplication, provided that 2n bits of result are kept.

Now consider the multiplication of two unsigned 4-bit values a and b. The value b can be rewritten in terms of its individual digits:

b = b3 ∗ 2^3 + b2 ∗ 2^2 + b1 ∗ 2^1 + b0 ∗ 2^0

Substituting this into the product a ∗ b gives:

a ∗ (b3 ∗ 2^3 + b2 ∗ 2^2 + b1 ∗ 2^1 + b0 ∗ 2^0)

which can be expanded into:

a ∗ b3 ∗ 2^3 + a ∗ b2 ∗ 2^2 + a ∗ b1 ∗ 2^1 + a ∗ b0 ∗ 2^0

The possible values of bi are 0 or 1. In the expression above, any term where bi = 0 resolves to 0

and the term can be eliminated. Furthermore, in any term where bi = 1, the digit bi is redundant (multiplying by 1 gives the same value), and therefore the digit bi can be eliminated from the term. The resulting expression can be written and generalized to n bits: the product a ∗ b is the sum of the terms a ∗ 2^i taken over every position i for which bi = 1.

This expression may look a bit intimidating, but it turns out to be reasonably simple to

implement in a computer because it only involves multiplying by 2 (and there is a trick that lets

computers do this easily!). Multiplying a value by 2 in the binary number system is analogous to

multiplying a value by 10 in the decimal number system. The result has one new digit: a 0 is

injected as the new least significant digit, and all of the original digits are shifted to the left as the

new digit is injected.

Think in terms of a decimal example, say: 37 ∗ 10 = 370. The original value is 37 and the result

is 370. The result has one more digit than the original value, and the new digit is a 0 that has

been injected as the least significant digit. The original digits (37) have been shifted one digit to

the left to admit the new 0 as the least significant digit.

The same rule holds for multiplying by 2 in the binary number system. For example: 101₂ ∗ 2 = 1010₂. The original value of 5 (101₂) is multiplied by 2 to give 10 (1010₂). The result can be obtained by shifting the original value left one digit and injecting a 0 as the new least significant digit.

The calculation of a product can be reduced to summing terms of the form a ∗ 2^i. The multiplication by 2^i can be reduced to shifting left i times! The shifting of binary values in a computer is very easy to do, and as a result, the calculation can be reduced to a series of shifts and adds.


Unsigned Integer Multiplication: Straightforward Method

To compute: a * b

where

• Register A contains a = an-1an-2…a1a0

• Register B contains b = bn-1bn-2…b1b0

and where register P is a register twice as large as A or B

Simple Multiplication Algorithm: Straightforward Method

Steps:

1. If LSB(A) = 1, then set Pupper to Pupper + b

2. Shift the register A right

• using zero extension (for unsigned values)

• forcing the LSB(A) to fall off the lower end

3. Shift the double register P right

• using zero extension (for unsigned values)

• pushing the LSB(Pupper) into the MSB(Plower)

After n times (for n-bit values),

The full contents of the double register P = a * b

Example of Simple Multiplication Algorithm (Straightforward Method)

Multiply b = 2 = 0010₂ by a = 3 = 0011₂

(Answer should be 6 = 0110₂)

P A B Comments

0000 0000 0011 0010 Start: A = 0011; B = 0010; P = (0000, 0000)

0010 0000 0011 0010 LSB(A) = 1 ==> Add b to P

0010 0000 0001 0010 Shift A right

0001 0000 0001 0010 Shift P right

0011 0000 0001 0010 LSB(A) = 1 ==> Add b to P

0011 0000 0000 0010 Shift A right

0001 1000 0000 0010 Shift P right

0001 1000 0000 0010 LSB(A) = 0 ==> Do nothing

0001 1000 0000 0010 Shift A right


0000 1100 0000 0010 Shift P right

0000 1100 0000 0010 LSB(A) = 0 ==> Do nothing

0000 1100 0000 0010 Shift A right

0000 0110 0000 0010 Shift P right

0000 0110 0000 0010 Done: P is product
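The straightforward method can be sketched in a few lines of Python (added here for illustration, not from the original text; P is simply held as one 2n-bit integer and the function name is hypothetical):

def multiply_unsigned(a, b, n=4):
    p = 0                         # the double-width product register P
    for _ in range(n):
        if a & 1:                 # LSB(A) = 1 -> add b to the upper half of P
            p += b << n
        a >>= 1                   # shift register A right (zero extension)
        p >>= 1                   # shift the double register P right
    return p

print(multiply_unsigned(3, 2))    # 6, matching the worked example above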

Unsigned Integer Multiplication: A More Efficient Method

To compute: a * b

where

• Register A contains a = an-1an-2…a1a0

• Register B contains b = bn-1bn-2…b1b0

and where register P is connected to register A to form a register twice as large as A or B and

register P is the upper part of the double register (P, A)

Simple Multiplication Algorithm: A More Efficient Method

Steps:

1. If LSB(A) = 1, then set P to P + b

2. Shift the double register (P, A) right

• using zero extension (for unsigned values)

• forcing the LSB(A) to fall off the lower end

• pushing the LSB(P) into the MSB(A)

After n times (for n-bit values),

The contents of the double register (P, A) = a * b

Example of Simple Multiplication Algorithm (A More Efficient Method)

Multiply b = 2 = 0010₂ by a = 3 = 0011₂

(Answer should be 6 = 0110₂)

P A B Comments

0000 0011 0010 Start: A = 0011; B = 0010; P = (0000, 0000)

0010 0011 0010 LSB(A) = 1 ==> Add b to P

0001 0001 0010 Shift (P, A) right

0011 0001 0010 LSB(A) = 1 ==> Add b to P

0001 1000 0010 Shift (P, A) right

0001 1000 0010 LSB(A) = 0 ==> Do nothing


0000 1100 0010 Shift (P, A) right

0000 1100 0010 LSB(A) = 0 ==> Do nothing

0000 0110 0010 Shift (P, A) right

0000 0110 0010 Done: P is product

By combining registers P and A, we can eliminate one extra shift step per iteration.

Positive Integer Multiplication

Shift-Add Multiplication Algorithm

(Same as Straightforward Method above)

• For unsigned or positive operands

• Repeat the following steps n times:

1. If LSB(A) = 1, then set Pupper to Pupper + b else set Pupper to Pupper + 0

2. Shift the double register (P, A) right

• using zero extension (for positive values)

• forcing the LSB(A) to fall off the lower end

Signed Integer Multiplication

Simplest:

1. Convert negative values of a or b to positive values

2. Multiply both positive values using one of the two algorithms above

3. Adjust the sign of the product appropriately

Alternate: Booth's algorithm

Introduction to Booth’s Multiplication Algorithm

A powerful algorithm for signed-number multiplication is Booth's algorithm. It generates a 2n-

bit product and treats both positive and negative numbers uniformly.

Consider a positive binary number containing a run of ones, e.g., the 8-bit value 00011110. Multiplying by such a value implies four consecutive additions of shifted multiplicands, since 00011110 = 2^5 – 2^1. This means that the same multiplication can be obtained using only two additions:

• + 2^5 x the multiplicand (here 00010100)

• – 2^1 x the multiplicand (here 00010100)

Since the 1s complement of 00010100 is 11101011, -00010100 is 11101100 in 2s complement. In other words, using sign extension and ignoring the overflow, the subtraction can be carried out by adding this 2s complement value, and the two-step computation produces the same product as the four additions.


Booth’s Recoding Table

Booth recodes the bits of the multiplier a according to the following table:

ai ai-1 Recoded ai

0 0 0

0 1 +1

1 0 -1

1 1 0

Always assume that there is a zero to the right of the multiplier

i.e. that a-1 is zero

so that we can consider the LSB (= a0)

and an imaginary zero bit to its right

So, if a is 00011110 ,

the recoded value of a is

0 0 +1 0 0 0 -1 0

Example of Booth’s Multiplication

1. Multiply 00101101 (b) by 00011110 (a) using normal multiplication


2. Multiply 00101101 (b) by 00011110 (a) using Booth's multiplication

Recall the recoded value of 00011110 (a) is 0 0 +1 0 0 0 -1 0

The 1s complement of 00101101 (b) is 11010010

The 2s complement of 00101101 (b) is 11010011

Hence, using sign extension and ignoring overflow:

Booth’s Multiplication Algorithm

Booth’s algorithm chooses between + b and - b depending on the two current bits of a .

Algorithm:

1. Assume the existence of bit a-1 = 0 initially

2. Repeat n times (for multiplying two n-bit values):

1. At step i:

2. If ai = 0 and ai-1 = 0, add 0 to register P

3. If ai = 0 and ai-1 = 1, add b to register P

i.e. treat ai as +1

4. If ai = 1 and ai-1 = 0, subtract b from register P

i.e. treat ai as -1

5. If ai = 1 and ai-1 = 1, add 0 to register P


6. Shift the double register (P, A) right one bit with sign extension
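A compact Python sketch of these steps (added for illustration, not from the original text; the function name is hypothetical, and P and A are kept as separate n-bit values that are shifted together with sign extension):

def booth_multiply(a, b, n=4):
    mask = (1 << n) - 1
    a, b = a & mask, b & mask
    b_neg = (-b) & mask                       # two's complement of b
    p, a_prev = 0, 0                          # register P and the bit a(i-1)
    for _ in range(n):
        bit = a & 1
        if bit == 1 and a_prev == 0:
            p = (p + b_neg) & mask            # treat ai as -1: subtract b
        elif bit == 0 and a_prev == 1:
            p = (p + b) & mask                # treat ai as +1: add b
        a_prev = bit                          # shift (P, A) right with sign extension
        a = (a >> 1) | ((p & 1) << (n - 1))
        p = (p >> 1) | (p & (1 << (n - 1)))
    prod = (p << n) | a                       # 2n-bit two's complement product
    return prod - (1 << 2 * n) if prod & (1 << (2 * n - 1)) else prod

print(booth_multiply(6, 2))    # 12
print(booth_multiply(-6, -5))  # 30

Running it on the two worked examples that follow reproduces the final (P, A) contents shown in their traces.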

Justification of Booth’s Multiplication Algorithm

Consider the operations described above:

ai ai-1 ai-1 – ai Operation

0 0 0 Add 0xb to P

0 1 +1 Add +1xb to P

1 0 -1 Add -1xb to P

1 1 0 Add 0xb to P

Equivalently, the algorithm could state

1. Repeat n times (for multiplying two n-bit values): at step i, add (ai-1 – ai) x b to register P

The result of all n steps is the sum:

(a-1 – a0) x 2^0 x b
+ (a0 – a1) x 2^1 x b
+ (a1 – a2) x 2^2 x b
+ …
+ (an-3 – an-2) x 2^(n-2) x b
+ (an-2 – an-1) x 2^(n-1) x b

where a-1 is assumed to be 0.

This sum is equal to b x SUM(i = 0 to n-1) ((ai-1 – ai) x 2^i)

= b x ( -2^(n-1) an-1 + 2^(n-2) an-2 + … + 2 a1 + a0 ) + b x a-1

= b x ( -2^(n-1) an-1 + 2^(n-2) an-2 + … + 2 a1 + a0 )

since a-1 = 0.

Now consider the representation of a as a 2s complement number. It can be shown to be the same as

-2^(n-1) an-1 + 2^(n-2) an-2 + … + 2 a1 + a0

where an-1 represents the sign of a:

• If an-1 = 0, a is a positive number

• If an-1 = 1, a is a negative number

Example:

• Let w = 1011₂

• Then w = -1×2^3 + 0×2^2 + 1×2^1 + 1×2^0 = -8 + 2 + 1 = -5

Thus the sum above,

b x ( -2^(n-1) an-1 + 2^(n-2) an-2 + … + 2 a1 + a0 ),

is the same as the 2s complement representation of b x a.

Example of Booth’s Multiplication Algorithm (Positive Numbers)

Multiply b = 2 = 0010₂ by a = 6 = 0110₂

(Answer should be 12 = 1100₂)

The 2s complement of b is 1110

P A B Comments

0000 0110 [0] 0010 Start: a-1 = 0

0000 0110 [0] 0010 00 ==> Add 0 to P

0000 0011 [0] 0010 Shift right

1110 0011 [0] 0010 10 ==> Subtract b from P

1111 0001 [1] 0010 Shift right

1111 0001 [1] 0010 11 ==> Add 0 to P

1111 1000 [1] 0010 Shift right

0001 1000 [1] 0010 01 ==> Add b to P

0000 1100 [0] 0010 Shift right

0000 1100 [0] 0010 Done: (P, A) is product

Example of Booth’s Multiplication Algorithm (Negative Numbers)

Multiply b = -5 = -(0101)₂ = 1011₂ by a = -6 = -(0110)₂ = 1010₂

(Answer should be 30 = 11110₂)

The 2s complement of b is 0101

P A B Comments

0000 1010 [0] 1011 Start: a-1 = 0

0000 1010 [0] 1011 00 ==> Add 0 to P


0000 0101 [0] 1011 Shift right

0101 0101 [0] 1011 10 ==> Subtract b from P

0010 1010 [1] 1011 Shift right

1101 1010 [1] 1011 01 ==> Add b to P

1110 1101 [0] 1011 Shift right

0011 1101 [0] 1011 10 ==> Subtract b from P

0001 1110 [1] 1011 Shift right

0001 1110 [1] 1011 Done: (P, A) is product

Advantages and Disadvantages of Booth’s Algorithm

Advantage

Handles positive and negative numbers uniformly

Efficient when there are long runs of ones in the multiplier

Disadvantage

Average speed of algorithm is about the same as with the normal multiplication algorithm

Worst case operates at a slower speed than the normal multiplication algorithm

Division

Terminology: dividend ÷ divisor = quotient & remainder

The implementation of division in a computer raises several practical issues:

• For integer division there are two results: the quotient and the remainder.

• The operand sizes (number of bits) to be used in the algorithm must be considered (i.e.

the sizes of the dividend, divisor, quotient and remainder).

• Overflow was not an issue in unsigned multiplication, but is a concern with division.

• As with multiplication, there are differences in the algorithms for signed vs. unsigned

division.

Recall that multiplying two n-bit values can result in a 2n-bit value. Division algorithms are often

designed to be symmetrical with this by specifying:

• the dividend as a 2n-bit value

• the divisor, quotient and remainder as n-bit values

Once the operand sizes are set, the issue of overflow may be addressed. For example, suppose

that the above operand sizes are used, and that the dividend value is larger than a value that can

be represented in n bits (i.e. 2n – 1 < dividend). Dividing by 1 (divisor = 1) should result with


quotient = dividend; however, the quotient is limited to n bits, and therefore is incapable of

holding the correct result. In this case, overflow would occur.

Sign Extension / Zero Extension

When loading a 16-bit value into a 32-bit register, how is the sign retained? By loading the 16-bit value into the lower 16 bits of the 32-bit register and by duplicating the MSB of the 16-bit value throughout the upper 16 bits of the 32-bit register. This is called sign extension. If instead the upper bits are always set to zero, it is called zero extension.
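For example (an illustrative Python sketch, not part of the original text; the function names are hypothetical):

def sign_extend_16_to_32(v):
    v &= 0xFFFF
    return v | 0xFFFF0000 if v & 0x8000 else v   # duplicate the MSB upward

def zero_extend_16_to_32(v):
    return v & 0xFFFF                            # upper 16 bits set to zero

print(hex(sign_extend_16_to_32(0x8001)))   # 0xffff8001
print(hex(zero_extend_16_to_32(0x8001)))   # 0x8001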

Unsigned Integer Division

To compute: a / b

where

• Register A contains a = an-1an-2…a1a0

• Register B contains b = bn-1bn-2…b1b0

and where register P is connected to register A to form a register twice as large as A or B and

register P is the upper part of the double register as done in many of the Multiplication methods.

Simple Unsigned Division

– for unsigned operands

Steps:

1. Shift the double register (P, A) one bit left

• using zero extension (for unsigned values)

• and forcing the MSB(P) to fall off the upper end

2. Subtract b from P

3. If the result is negative, then set LSB(A) to 0; else set LSB(A) to 1

4. If the result is negative, set P to P + b (restore)

After repeating these steps n times (for n-bit values), the contents of register A = a / b, and the

contents of register P = remainder (a / b)


This is also called restoring division
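A short Python sketch of restoring division for unsigned n-bit operands (added here for illustration; the function name is hypothetical):

def restoring_divide(a, b, n=4):
    p = 0
    for _ in range(n):
        # shift the double register (P, A) one bit left
        p = (p << 1) | ((a >> (n - 1)) & 1)
        a = (a << 1) & ((1 << n) - 1)
        p -= b                       # subtract b from P
        if p < 0:
            p += b                   # negative result: restore P, LSB(A) stays 0
        else:
            a |= 1                   # non-negative result: set LSB(A) to 1
    return a, p                      # (quotient, remainder)

print(restoring_divide(14, 3))       # (4, 2), matching the example below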

Restoring Division Example

Divide 14 = 1110₂ by 3 = 0011₂

P A B Comments

+00000 1110 0011 Start

+00001 1100 0011 Shift left

-00010 1100 0011 Subtract b

+00001 1100 0011 Restore

+00001 1100 0011 Set LSB(A) to 0

+00011 1000 0011 Shift left

+00000 1000 0011 Subtract b

+00000 1001 0011 Set LSB(A) to 1

+00001 0010 0011 Shift left

-00010 0010 0011 Subtract b

+00001 0010 0011 Restore

+00001 0010 0011 Set LSB(A) to 0

+00010 0100 0011 Shift left

-00001 0100 0011 Subtract b

+00010 0100 0011 Restore

+00010 0100 0011 Set LSB(A) to 0

+00010 0100 0011 Done

Restoring versus Nonrestoring Division

Let r = the contents of (P, A)

At each step, the restoring algorithm computes

P = (2r – b)

If (2r – b) < 0,

then

• (P, A) = 2r (restored)

• (P, A) = 4r (shifted left)

• (P, A) = 4r – b (subtract b for next step)

If (2r – b) < 0 and there is no restoring, then

• (P, A) = 2r – b (not restored)

• (P, A) = 4r – 2b (shifted left)

• (P, A) = 4r – b (add b for next step)

Nonrestoring Division Algorithm

Steps:

1. If P is negative,

1. Shift the double register (P, A) one bit left

2. Add b to P

2. Else if P is not negative,

1. Shift the double register (P, A) one bit left

2. Subtract b from P

3. If P is negative, then set LSB(A) to 0 else set LSB(A) to 1

After repeating these steps n times (for n-bit values),

if P is negative, do a final restore

i.e. add b to P

Then

the contents of register A = a / b, and

the contents of register P = remainder (a / b)
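The same division can be sketched in Python without the restore inside the loop (an illustrative version of the steps above, with a single final restore; not part of the original text):

def nonrestoring_divide(a, b, n=4):
    p = 0
    for _ in range(n):
        negative = p < 0
        # shift the double register (P, A) one bit left
        p = 2 * p + ((a >> (n - 1)) & 1)
        a = (a << 1) & ((1 << n) - 1)
        p = p + b if negative else p - b
        if p >= 0:
            a |= 1                   # set LSB(A) to 1; otherwise it stays 0
    if p < 0:
        p += b                       # final restore of the remainder
    return a, p                      # (quotient, remainder)

print(nonrestoring_divide(14, 3))    # (4, 2)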

Nonrestoring Division Example

Divide 14 = 1110₂ by 3 = 0011₂

P A B Comments

00000 1110 0011 Start

00001 1100 0011 Shift left

11110 1100 0011 Subtract b

11110 1100 0011 P negative; Set LSB(A) to 0

11101 1000 0011 Shift left

00000 1000 0011 P negative; Add b

00000 1001 0011 P positive; Set LSB(A) to 1

00001 0010 0011 Shift left

11110 0010 0011 Subtract b

11110 0010 0011 P negative; Set LSB(A) to 0


11100 0100 0011 Shift left

11111 0100 0011 P negative; Add b

11111 0100 0011 P negative; Set LSB(A) to 0

00010 0100 0011 P=remainder negative; Need final restore

00010 0100 0011 Done

Division by Multiplication

There are numerous algorithms for division, many of which do not relate as well to the division methods learned in elementary school. One of these is Division by Multiplication; it is also known as Goldschmidt's algorithm.

If we wish to divide N by D, then we are looking for Q such that Q = N/D

Since N/D is a fraction, we can multiply both the numerator and denominator by the same value,

x, without changing the value of Q. This is the basic idea of the algorithm. We wish to find some

value x such that Dx becomes close to 1, so that we only need to compute Nx. To find such an x, first scale D by shifting it right or left so that 1/2 <= D < 1. Let s = the number of shifts, taken as positive if the shifts are to the left and negative otherwise, and call the scaled value D' (= D · 2^s). Now compute 1 – D' and call this Z. Notice that 0 < Z <= 1/2 since Z = 1 – D'; furthermore D' = 1 – Z.

Then

Q = N / D
= N (1+Z) / (D (1+Z))
= 2^s N (1+Z) / (2^s D (1+Z))
= 2^s N (1+Z) / (D' (1+Z))
= 2^s N (1+Z) / ((1-Z) (1+Z))
= 2^s N (1+Z) / (1-Z^2)

Similarly,

Q = N / D
= 2^s N (1+Z) / (1-Z^2)
= 2^s N (1+Z) (1+Z^2) / ((1-Z^2) (1+Z^2))
= 2^s N (1+Z) (1+Z^2) / (1-Z^4)

And,

Q = N / D
= 2^s N (1+Z) (1+Z^2) / (1-Z^4)
= 2^s N (1+Z) (1+Z^2) (1+Z^4) / ((1-Z^4) (1+Z^4))
= 2^s N (1+Z) (1+Z^2) (1+Z^4) / (1-Z^8)

continuing to

Q = N / D
= 2^s N (1+Z) (1+Z^2) (1+Z^4) · · · (1+Z^(2^(n-1))) / (1-Z^(2^n))

Since 0 < Z <= 1/2, Z^i goes to zero as i goes to infinity. This means that the denominator, 1-Z^(2^n), goes to 1 as n gets larger. Since the denominator goes to 1, we need only compute the numerator in order to determine (or approximate) the quotient, Q.

So, Q = 2^s N (1+Z) (1+Z^2) (1+Z^4) · · · (1+Z^(2^(n-1)))
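A numerical sketch of this scheme in Python (illustrative only, assuming D > 0 and using ordinary floats; scaling N together with D plays the role of the 2^s factor in the derivation, and the function name is hypothetical):

def goldschmidt_divide(n, d, iterations=5):
    while d >= 1.0:                 # scale D into the interval [1/2, 1)
        d /= 2.0
        n /= 2.0                    # scaling N identically keeps N/D unchanged
    while d < 0.5:
        d *= 2.0
        n *= 2.0
    z = 1.0 - d                     # 0 < Z <= 1/2
    q = n
    for _ in range(iterations):
        q *= 1.0 + z                # multiply the numerator by (1 + Z^(2^i))
        z *= z                      # square Z for the next factor
    return q

print(goldschmidt_divide(1.8, 0.9))   # approaches 2.0, as in Example 1 below
print(goldschmidt_divide(14.0, 3.0))  # approximately 4.6667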

Examples of Division by Multiplication

Example 1:

Let’s start with a simple example in the decimal system

Suppose N = 1.8 and D = 0.9

(Clearly, the answer is 2)


Since D = 0.9, it does not need to be shifted, so the number of shifts, s, is zero. And since we are in the decimal system, we use 10^s in the equation for Q. Furthermore, Z = 1 – D = 1 – 0.9 = 0.1

For n = 0,

Q = N / D = 1.8 / 0.9
≈ 10^0 · 1.8
= 1.8

For n = 1,

Q = N / D = 1.8 / 0.9
≈ 10^0 · 1.8 (1 + Z)
= 1.8 (1 + 0.1)
= 1.8 · 1.1
= 1.98

For n = 2,

Q = N / D = 1.8 / 0.9
≈ 10^0 · 1.8 (1 + Z) (1 + Z^2)
= 1.8 (1 + 0.1) (1 + 0.01)
= 1.8 · 1.1 · 1.01
= 1.98 · 1.01
= 1.9998

For n = 3,

Q = N / D = 1.8 / 0.9
≈ 10^0 · 1.8 (1 + Z) (1 + Z^2) (1 + Z^4)
= 1.8 (1 + 0.1) (1 + 0.01) (1 + 0.0001)
= 1.8 · 1.1 · 1.01 · 1.0001
= 1.98 · 1.01 · 1.0001
= 1.9998 · 1.0001
= 1.99999998

And so on, getting closer and closer to 2.0 with each increase of n

Self test Question 5.4

1. In binary arithmetic, we simply reserve ___ bit to determine the sign.

2. 4-bit fixed length binary values are used to encode counting numbers, thereby restricting

the set of represented numbers to _______.

3. Using 4-bit fixed length binary, the addition of 4 and 14 results in ______

4. When does an arithmetic overflow occurs?

5. The 4-bit two's complement representation of -5 is ______.

6. (1010)₂ – (0001)₂ results in the integer _____ under unsigned integer arithmetic and the integer _____ under signed integer arithmetic.

Answers to self test 5.4

1. one

2. 0 to 15

3. overflow

4. When the result of an arithmetic operation is outside the representable range

5. (1011)

6. 9 & -7


MC0062(B)5.5 Floating Point Numbers

Floating Point Numbers

Instead of using the obvious representation of rational numbers presented in the previous section,

most computers use a different representation of a subset of the rational numbers. We call these

numbers floating-point numbers.

Floating-point numbers use inexact arithmetic, and in return require only a fixed-size

representation. For many computations (so-called scientific computations, as if other

computations weren’t scientific) such a representation has the great advantage that it is fast,

while at the same time usually giving adequate precision.

There are some (sometimes spectacular) exceptions to the “adequate precision” statement in the

previous paragraph, though. As a result, an entire discipline of applied mathematics, called

numerical analysis, has been created for the purpose of analyzing how algorithms behave with

respect to maintaining adequate precision, and of inventing new algorithms with better properties

in this respect.

The basic idea behind floating-point numbers is to represent a number as a mantissa and an exponent, each with a fixed number of bits of precision. If we denote the mantissa with m and the exponent with e, then the number thus represented is m * 2^e.

Again, we have a problem that a number can have several representations. To obtain a canonical

form, we simply add a rule that m must be greater than or equal to 1/2 and strictly less than 1. If

we write such a mantissa in binal (analogous to decimal) form, we always get a number that

starts with 0.1. This initial information therefore does not have to be represented, and we

represent only the remaining “binals”.

The reason floating-point representations work well for so-called scientific applications is that we more often need to multiply or divide two numbers. Multiplication of two floating-point

numbers is easy to obtain. It suffices to multiply the mantissas and add the exponents. The

resulting mantissa might be smaller than 1/2, in fact, it can be as small as 1/4. In this case, the

result needs to be canonicalized. We do this by shifting the mantissa left by one position and

subtracting one from the exponent. Division is only slightly more complicated. Notice that the

imprecision in the result of a multiplication or a division is only due to the imprecision in the

original operands. No additional imprecision is introduced by the operation itself (except

possibly 1 unit in the least significant digit). Floating-point addition and subtraction do not have

this property.

To add two floating-point numbers, the one with the smallest exponent must first have its

mantissa shifted right by n steps, where n is the difference of the exponents. If n is greater than

the number of bits in the representation of the mantissa, the second number will be treated as 0 as

far as addition is concerned. The situation is even worse for subtraction (or addition of one

positive and one negative number). If the numbers have roughly the same absolute value, the


result of the operation is roughly zero, and the resulting representation may have no correct

significant digits.

The two’s complement representation that we mentioned above is mostly useful for addition and

subtraction. It only complicates things for multiplication and division. For multiplication and

division, it is better to use a representation with sign + absolute value. Since multiplication and

division are more common with floating-point numbers, and since they result in multiplication and

division of the mantissa, it is more advantageous to have the mantissa represented as sign +

absolute value. The exponents are added, so it is more common to use two’s complement (or

some related representation) for the exponent.

Usually, computers manipulate data in chunks of 8, 16, 32, 64, or 128 bits. It is therefore useful

to fit a single floating-point number with both mantissa and exponent in such a chunk. In such a

chunk, we need to have room for the sign (1 bit), the mantissa, and the exponent. While there are

many different ways of dividing the remaining bits between the mantissa and the exponent, in

practice most computers now use a norm called IEEE, which mandates the formats as shown in

figure 5.2

Figure 5.2: Formats of floating point numbers.

Floating Point Variables

Floating point variables have been represented in many different ways inside computers of the

past. But there is now a well adhered to standard for the representation of floating point

variables. The standard is known as the IEEE Floating Point Standard (FPS). Like scientific

notation, FPS represents numbers with multiple parts, a sign bit, one part specifying the mantissa

and a part representing the exponent. The mantissa is represented as a signed magnitude integer

(i.e., not 2's Complement), where the value is normalized. The exponent is represented as an

unsigned integer which is biased to accommodate negative numbers. An 8-bit unsigned value

would normally have a range of 0 to 255, but 127 is added to the exponent, giving it a range of -

126 to +127.


Follow these steps to convert a number to FPS format.

1. First convert the number to binary.

2. Normalize the number so that there is one nonzero digit to the left of the binary point,

adjusting the exponent as necessary.

3. The digits to the right of the binary point are then stored as the mantissa starting with the

most significant bits of the mantissa field. Because all numbers are normalized, there is no

need to store the leading 1.

Note: Because the leading 1 is dropped, it is no longer proper to refer to the stored value as the mantissa. In IEEE terms, this mantissa minus its leading digit is called the significand.

4. Add 127 to the exponent and convert the resulting sum to binary for the stored exponent value. For double precision, add 1023 to the exponent. Be sure to include all 8 or 11 bits of

the exponent.

5. The sign bit is a one for negative numbers and a zero for positive numbers.

6. Compilers often express FPS numbers in hexadecimal, so a quick conversion to

hexadecimal might be desired.

Here are some examples using single precision FPS.

• 3.5 = 11.1 (binary)

= 1.11 x 2^1 sign = 0, significand = 1100…,

exponent = 1 + 127 = 128 = 10000000

FPS number (3.5) = 0100000001100000…

= 0x40600000

• 100 = 1100100 (binary)

= 1.100100 x 2^6 sign = 0, significand = 100100…,

exponent = 6 + 127 = 133 = 10000101

FPS number (100) = 010000101100100…

= 0x42c80000

• What decimal number is represented in FPS as 0xc2508000?


Here we just reverse the steps.

0xc2508000 = 11000010010100001000000000000000 (binary)

sign = 1; exponent = 10000100; significand =

10100001000000000000000

exponent = 132 ==> 132 – 127 = 5

-1.10100001 x 2^5 = -110100.001 = -52.125
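These worked examples can be checked with a few lines of Python (a sketch added here, not from the original text): the standard struct module packs a float using this IEEE single-precision layout.

import struct

def to_fps_hex(x):
    return struct.pack('>f', x).hex()               # big-endian single precision

def from_fps_hex(h):
    return struct.unpack('>f', bytes.fromhex(h))[0]

print(to_fps_hex(3.5))            # 40600000
print(to_fps_hex(100.0))          # 42c80000
print(from_fps_hex('c2508000'))   # -52.125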

Floating Point Arithmetic

Until fairly recently, floating point arithmetic was performed using complex algorithms with an integer arithmetic ALU. The main ALU in CPUs is still an integer arithmetic ALU. However, in the mid-1980s, special hardware was developed to perform floating point arithmetic. Intel, for example, sold a chip known as the 80387, which was a math co-processor to go along with the 80386 CPU. Most people did not buy the 80387 because of the cost. A major selling point of the 80486 was that the math co-processor was integrated onto the CPU, which eliminated the need to purchase a separate chip to get faster floating point arithmetic.

Floating point hardware usually has a special set of registers and instructions for performing

floating point arithmetic. There are also special instructions for moving data between memory or

the normal registers and the floating point registers.

Addition of Floating-Point Numbers

The steps (or stages) of a floating-point addition:

1. The exponents of the two floating-point numbers to be added are compared to find the

number with the smallest magnitude

2. The significand of the number with the smaller magnitude is shifted so that the exponents

of the two numbers agree

3. The significands are added

4. The result of the addition is normalized

5. Checks are made to see if any floating-point exceptions occurred during the addition,

such as overflow

6. Rounding occurs

Floating-Point Addition Example

Example: s = x + y

• numbers to be added are x = 1234.00 and y = -567.8

• these are represented in decimal notation with a mantissa (significand) of four digits


• six stages (A – F) are required to complete the addition

Step      A            B             C             D           E           F

X         0.1234E4     0.12340E4

Y         -0.5678E3    -0.05678E4

S                                    0.066620E4    0.6662E3    0.6662E3    0.6662E3

(For this example, we are throwing out biased exponents and the assumed 1.0 before the

magnitude. Also all numbers are in the decimal number system and no complements are used.)
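A rough Python sketch of the six stages (added for illustration, not from the original text; numbers are held as (mantissa, exponent) pairs meaning mantissa x 10^exponent, with a four-digit mantissa as in the example):

DIGITS = 4

def fp_add(x, y):
    (mx, ex), (my, ey) = x, y
    if ex < ey:                       # A: compare exponents
        (mx, ex), (my, ey) = (my, ey), (mx, ex)
    my /= 10 ** (ex - ey)             # B: shift the smaller number's significand
    m, e = mx + my, ex                # C: add the significands
    while m != 0 and abs(m) < 0.1:    # D: normalize so that 0.1 <= |m| < 1
        m *= 10
        e -= 1
    while abs(m) >= 1:
        m /= 10
        e += 1
    # E: exception checks (overflow and so on) would go here
    return round(m, DIGITS), e        # F: round to the available digits

print(fp_add((0.1234, 4), (-0.5678, 3)))   # (0.6662, 3), i.e. 666.2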

Time for Floating-Point Addition

Consider a set of floating-point additions sequentially following one another

(as in adding the elements of two arrays)

Assume that each stage of the addition takes t time units

Time:   t        2t       3t       4t       5t       6t       7t       8t

Step

A       x1+y1                                                  x2+y2

B                x1+y1                                                  x2+y2

C                         x1+y1

D                                  x1+y1

E                                           x1+y1

F                                                    x1+y1

Each floating-point addition takes 6t time units

Pipelined Floating-Point Addition

With the proper architectural design, the floating-point addition stages can be overlapped

Time:   t        2t       3t       4t       5t       6t       7t       8t

Step

A       x1+y1    x2+y2    x3+y3    x4+y4    x5+y5    x6+y6    x7+y7    x8+y8

B                x1+y1    x2+y2    x3+y3    x4+y4    x5+y5    x6+y6    x7+y7

C                         x1+y1    x2+y2    x3+y3    x4+y4    x5+y5    x6+y6

D                                  x1+y1    x2+y2    x3+y3    x4+y4    x5+y5

E                                           x1+y1    x2+y2    x3+y3    x4+y4

F                                                    x1+y1    x2+y2    x3+y3


This is called pipelined floating-point addition. Once the pipeline is full and has produced the first result in 6t time units, it only takes 1t time units to produce each succeeding sum; in general, k pipelined additions finish in (k + 5)t time units rather than the 6kt time units a non-pipelined unit would need.

Self test Question 5.5

1. The basic idea behind floating-point numbers is to represent a number –––––––––––––.

2. IEEE FPS represents numbers with ____________________ .

3. The mantissa is represented as a _____ integer.

4. The exponent is represented as an _______ integer.

5. Floating point hardware usually has a special set of ________ and ________ for

performing floating point arithmetic.

Answers to self test 5.5

1. as mantissa and an exponent

2. a sign bit, the mantissa and the exponent

3. signed magnitude

4. unsigned

5. registers & instructions

MC0062(B)5.6 Real numbers

Real numbers

One sometimes hears variations on the phrase “computers can’t represent real numbers exactly”.

This, of course, is not true. Nothing prevents us from representing (say) the square root of two as

the number two and a bit indicating that the value is the square root of the representation. Some

useful operations could be very fast this way. It is true, though, that we cannot represent all real numbers exactly. In fact, we have a problem similar to the one we have with rational numbers, in that it

is hard to pick a useful subset that we can represent exactly, other than the floating-point

numbers.

For this reason, no widespread hardware contains built-in real numbers other than the usual

approximations in the form of floating-point.

Summary

The ALU operates only on data using the registers which are internal to the CPU. All computers

deal with numbers. We have studied the instructions that perform basic arithmetic operations on

data operands in the previous unit. To understand the task carried out by the ALU, we introduced the

representation of numbers in a computer and how they are manipulated in addition and

subtraction operations.


Next, we dealt with the core arithmetic operations, viz. addition, subtraction, multiplication and division. We discussed the various techniques for fixed point unsigned and signed number arithmetic, and we also discussed Booth's algorithm for multiplication of binary numbers. A separate section was dedicated to the floating point standard prevalent today, the IEEE standard. We also examined the addition operation for floating point numbers, with emphasis on timing and pipelining concepts.

Exercise:

1. Explain the addition of two floating point numbers with examples.

2. Discuss the different formats of floating point numbers.

3. Compute the product of 7 and 2 using Booth's algorithm.