
CMOS & External Memory

Hi everyone,

Haiz... I am both sad and happy today.
Sad = this is the last post.
Happy = I have finally finished my mission.

Anyway, I hope all my posts have helped you guys in some way.


Let's start it together :)

What is CMOS?
CMOS stands for Complementary Metal Oxide Semiconductor.
-It requires very little power to retain its contents.
-It can store vital data of the computer system even when the power supply is turned off.
-It holds configuration data about devices such as floppy and hard-disk drives.
-It also keeps system setup parameters (e.g. date, time, display settings).




I believe you guys have some idea of what external memory is, right?

Normally, external memory can be separated into 3 common types:

1) Magnetic Disk

●The substrate used to be aluminium; newer disks use glass substrates, which give:
●Improved surface uniformity
●Reduction in surface defects
●Lower flight heights
●Better stiffness and shock/damage resistance

Two types of Magnetic Disk:

RAID (Redundant Array of Independent Disks)
-The latest technology
-6 levels in common use, but they do not imply a hierarchical relationship.
-3 common characteristics:
  a) A set of physical disks viewed as a single logical drive by the O/S
  b) Data distributed across the physical drives
  c) Redundant capacity can be used to store parity information





-Removable (Floppy)

2)Optical Storage

-CD-ROM
-CD-Recordable(CD-R)
-CD-R\W
-DVD
-Blu-ray

3)Magnetic Tape
●Serial access
●Slower
●Cheapest
●Backup and archive




Read and Write Mechanisms
Recording and retrieval are done through a conductive coil called a head.
During the read/write process, the head is stationary while the platter rotates.









Characteristics

Head motion: fixed (rare) or movable head
Disk portability: removable or fixed (non-removable)
Magnetized-coating sides: single or double sided
Platters: single or multiple platters
Head mechanism:
-Contact (floppy)
-Fixed gap
-Aerodynamic








@The End@

I hope you guys liked my tutorial on Memory System Architecture.
If there is any problem or mistake, please kindly comment on my post. I really appreciate your suggestions. Thank you.


                                                                                                                                 Written by--®æŋ

Cache Replacement & Cache Write Policy


Hi guys~

Now I am going to introduce cache replacement and the cache write policy.

Cache replacement means that an existing cache entry is deleted to create space for a new entry when the cache is already full.

Here are the replacement algorithms:

1) Direct mapping
    -No choice
    -Each block maps to only one line
    -That line is simply replaced

2) Associative & Set Associative
    -Algorithm implemented in hardware (for speed)
    -Least Recently Used (LRU)
    -First In First Out (FIFO)
    -Least Frequently Used (LFU)
    -Random
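The LRU policy above can be sketched in a few lines of Python. This is a toy model I made up purely for illustration (the class and method names are not from any real cache controller): an OrderedDict keeps the lines in usage order, so the least recently used line is always at the front.

```python
from collections import OrderedDict

class LRUCache:
    """Toy sketch of Least Recently Used replacement (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # block number -> data, least recently used first

    def access(self, block, data=None):
        if block in self.lines:
            self.lines.move_to_end(block)   # cache hit: mark as most recently used
            return True
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # full: evict the least recently used line
        self.lines[block] = data            # cache miss: load the new block
        return False

cache = LRUCache(2)
cache.access(1)   # miss, loads block 1
cache.access(2)   # miss, loads block 2
cache.access(1)   # hit, block 1 becomes most recently used
cache.access(3)   # miss, evicts block 2 (the least recently used)
```

Swapping `popitem(last=False)` for a simple queue would give FIFO instead; real caches implement this logic in hardware for speed, as noted above.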






Write Policy
- Must not overwrite a cache block unless main memory is up to date.
- Multiple CPUs may have individual caches.
- I/O may address main memory directly.


There are 2 types of write policy:

1) Write through

-All writes go to main memory as well as the cache.
-Slows down writes.
-Generates lots of memory traffic.

2) Write back

-Updates are initially made in the cache only.
-An update (dirty) bit for the cache slot is set when an update occurs.
-Other caches can get out of sync.
-I/O must access main memory through the cache.
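Here is a toy Python model of the two policies (the class names and structure are my own, purely for illustration): write through updates main memory on every write, while write back only updates it when a dirty line is evicted.

```python
class WriteThroughCache:
    """Sketch: every write goes to both the cache and main memory."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value   # write also goes to main memory -> more traffic

class WriteBackCache:
    """Sketch: writes stay in the cache; memory is updated only on eviction."""
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}   # addr -> (value, dirty bit)

    def write(self, addr, value):
        self.lines[addr] = (value, True)  # set the update (dirty) bit; memory is now stale

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:
            self.memory[addr] = value     # main memory catches up only now

mem_wt = {0: 0}
wt = WriteThroughCache(mem_wt)
wt.write(0, 7)        # mem_wt[0] is 7 immediately

mem_wb = {0: 0}
wb = WriteBackCache(mem_wb)
wb.write(0, 7)        # mem_wb[0] is still 0 here -- out of sync
wb.evict(0)           # now mem_wb[0] becomes 7
```

The stale window in the write-back case is exactly why other caches can get out of sync and why I/O must go through the cache, as listed above.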




Ya, that's all for cache replacement & the cache write policy.
The next post will discuss CMOS & External Memory.
Thank you for viewing~~ :)


                                                                                            Written by--®æŋ

Solve the Mapping Function Question


Solve the Mapping Function Question:
Question 1
A computer system has a memory architecture made up of a main memory of 128 MB and
cache of 128 KB. In order to perform an efficient mapping function, the main memory is
arranged in block of 8 bytes. Draw the address structure for the different mapping functions as
below. (Indicate the fields and the number of bits required for each field.)

-Direct Mapping 

From the question, we know that:

Main memory = 128 MB = (128 x 1024 ) KB = (128 x 1024 x 1024 ) Bytes
Cache capacity = 128 KB = (128 x 1024 ) Bytes
Block capacity = 8 Bytes

Step 1:
Use "log A / log 2" (i.e. log2 of the capacity) to find the number of bits for each field.

Main Memory = log (128 x 1024 x 1024) / log 2 = 27 bits

Lines in Cache = Cache capacity / Block Capacity = (128 x 1024) / 8 = (16 x 1024) lines.
Convert this into bits:
Lines in Cache = log (16 x 1024) / log 2 = 14 bits.

Block Capacity (Size) = 8 Bytes: log (8) / log (2) = 3 bits.

Step 2:
Find the Line, Tag and Word.


r = number of line bits, w = number of word bits (block size), s = main memory address length - w.

r = 14 bits 
w = 3 bits
s = 27- 3 = 24 bits

Tag = s - r = 24 - 14 = 10 bits.

Step 3:
Finish the address structure.



Step 4:
Check that the total (tag + line + word) = main memory address length:
10 (tag) + 14 (line) + 3 (word) = 27 (main memory address length).
If it matches, you did it correctly.
If not, check your calculations in the previous steps.
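The four steps above can be wrapped into a small Python helper (a sketch I wrote for this example; the function name is made up):

```python
import math

def direct_mapping_bits(main_memory_bytes, cache_bytes, block_bytes):
    """Return (tag, line, word) field widths for a direct-mapped cache."""
    address_bits = int(math.log2(main_memory_bytes))   # main memory address length
    word = int(math.log2(block_bytes))                 # picks a byte within a block
    line = int(math.log2(cache_bytes // block_bytes))  # picks a cache line
    tag = address_bits - line - word                   # the rest identifies the block
    return tag, line, word

# The question's numbers: 128 MB main memory, 128 KB cache, 8-byte blocks
print(direct_mapping_bits(128 * 1024 * 1024, 128 * 1024, 8))   # -> (10, 14, 3)
```

The three widths always sum to the address length, which is exactly the check in Step 4.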




-Associative Mapping

Main memory = 128 MB = (128 x 1024 ) KB = (128 x 1024 x 1024 ) Bytes
Cache capacity = 128 KB = (128 x 1024 ) Bytes
Block capacity = 8 Bytes

Step 1:
Use "log A / log 2" (i.e. log2 of the capacity) to find the number of bits for each field.
Main Memory = log (128 x 1024 x 1024 ) / log 2 = 27 bits
Lines in Cache = Undetermined
Block Capacity(Size) = 8 Bytes = log (8) / log (2) = 3 bits.

Step 2:
Find the value of address structure.



s (tag) = main memory address length - block size , w = block size.
w = 3
s = 27 -3
s = 24

Step 3:
Finish the address structure.


Step 4:
Check that the total (tag + word) = main memory address length:
24 (tag) + 3 (word) = 27 (main memory address length).
If it matches, you did it correctly.
If not, check your calculations in the previous steps.


-'K' Way Set Associative Mapping
Main memory = 128 MB = (128 x 1024 ) KB = (128 x 1024 x 1024 ) Bytes
Cache capacity = 128 KB = (128 x 1024 ) Bytes
Block capacity = 8 Bytes

Let's say this is the 4 way set associative mapping.
So, k = 4.


Step 1:
Use "log A / log 2" (i.e. log2 of the capacity) to find the number of bits for each field.

Main Memory = log (128 x 1024 x 1024 ) / log 2 = 27 bits

Lines in Cache = Cache capacity / Block Capacity = (128 x 1024) / 8 = (16 x 1024) lines.

As this is 4-way set associative,
the number of lines in a set (k) is 4.

Number of lines in cache (r) = k * number of sets (v),
so v = r / k.

Number of sets = (16 x 1024) lines / 4 = (4 x 1024) sets.
Convert this into bits:
Set field (d) = log (4 x 1024) / log 2 = 12 bits.

Block Capacity(Size) = 8 Bytes = log (8) / log (2) = 3 bits.


Step 2:
Find the value of address structure.


d = number of set bits, w = block size (word) bits, s = main memory address length - w.
d = 12 bits
w = 3 bits
s = 27 - 3 = 24 bits
Tag = s - d = 24 - 12 = 12 bits.
Step 3:
Finish the address structure.


Step 4:
Check that the total (tag + set + word) = main memory address length:
12 (tag) + 12 (set) + 3 (word) = 27 (main memory address length).
If it matches, you did it correctly.
If not, check your calculations in the previous steps.
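The set-associative steps can also be packed into a small Python helper (again a sketch written for this example; the function name is made up):

```python
import math

def set_associative_bits(main_memory_bytes, cache_bytes, block_bytes, k):
    """Return (tag, set, word) field widths for a k-way set-associative cache."""
    address_bits = int(math.log2(main_memory_bytes))   # main memory address length
    word = int(math.log2(block_bytes))                 # picks a byte within a block
    lines = cache_bytes // block_bytes                 # total lines in the cache
    d = int(math.log2(lines // k))                     # set field: sets = lines / k
    tag = address_bits - d - word                      # the rest is the tag
    return tag, d, word

# The question's numbers with k = 4 (4-way set associative)
print(set_associative_bits(128 * 1024 * 1024, 128 * 1024, 8, 4))  # -> (12, 12, 3)
```

Note that setting k = 1 gives back the direct-mapped answer of (10, 14, 3), which matches the idea that direct mapping is just 1-way set associative.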




That's the way to solve mapping function questions.
I hope you guys understand my steps.
Thank you. =)


                                                                                                                                  Written by--®æŋ










Mapping Function 3.0

Hi everyone, Mapping Function 3.0 has arrived!! :D


MF 1.0 is for Direct Mapped function,
MF 2.0 is for Associative Mapped function.

So, MF 3.0 is for.....

Ya, MF 3.0 is for SET ASSOCIATIVE MAPPING FUNCTION !!!


As the name of this mapping function suggests, the cache is divided into a number of sets.
Each set contains a number of lines.
If there are 2 lines in one set, it's called 2-way set associative mapping.





Actually, the address structure of direct mapping is almost the same as that of set associative mapping.
The only difference is the field between the tag and the word: for direct mapping it is the LINE field; for set associative mapping it is the SET field.


Summary
Address Length = (s+w) bits
Block Size = Line Size = 2^(w) bytes/words
Number of blocks in main memory = 2^(s)
Number of lines in a set = k (k-way set associative mapping)
Number of sets = v = 2^(d)
Number of lines in cache = k*v = k * 2^(d)
Size of tag = (s-d) bits
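To see how blocks land in sets, here is a tiny made-up Python example (the sizes are invented for illustration): a block maps to set number (block mod v), and then it can go in any of the k lines inside that set.

```python
# A block of main memory maps to exactly one set, but can occupy any of the
# k lines inside that set. Toy example with v = 4 sets (numbers made up).
def set_for_block(block_number, num_sets):
    return block_number % num_sets

v = 4
for block in (0, 1, 4, 5, 9):
    print("block", block, "-> set", set_for_block(block, v))
```

Blocks 0 and 4 both land in set 0, but with k lines per set they no longer have to evict each other the way they would under direct mapping.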



Hooray~~
Finally, I have finished all the mapping functions.
In the coming post, I will show how to solve mapping questions as well.
So, see you guys in the next post~~

╭⌒╮¤      `  
╭╭ ⌒╮ ●╭○╮   
╰ ----╯/█∨█\  
 ~~~~~~~~∏~~~∏~~~~~~~~~~~


Youtube:K way set associative mapped function

                                                                                                               Written by--®æŋ

Mapping Function 2.0

Hello guys,

Let us continue with Mapping Function 2.0 :)

Associative Mapping
-The replacement policy is free to choose any entry in the cache to hold the copy.
-Associative mapping is a trade-off.
-Every line's tag is examined for a match.
-Cache searching is expensive.





From the diagram, a fully associative cache has more placement choices than a direct mapped cache, since in direct mapping each memory block can only be linked to one cache line.



















Associative Mapping Address Structure




Fully Associative Mapping Cache Organization





Summary:

Ⅰ-Address Length (Memory Length)= (s+w ) bits
Ⅱ-Block Size=Line Size=2^(w) bytes/Words
Ⅲ-Number of Lines in Cache = Undetermined
Ⅳ-Number of blocks in Memory = 2^(s)
Ⅴ-Size of Tag = 's' bits
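The "cache searching is expensive" point can be sketched in Python (a toy linear search I made up; real hardware compares all the tags in parallel, which is what makes associative caches costly):

```python
def associative_lookup(cache_lines, tag):
    """Fully associative lookup: every line's tag must be examined (sketch)."""
    for line_tag, data in cache_lines:
        if line_tag == tag:
            return data        # hit: tag matched one of the lines
    return None                # miss: but the block may be placed in ANY line

lines = [(0x3A, "block A"), (0x7F, "block B")]
print(associative_lookup(lines, 0x7F))   # -> block B
print(associative_lookup(lines, 0x01))   # -> None
```

Every lookup touches every line's tag, which is the trade-off mentioned above: total placement freedom in exchange for expensive searching.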


Youtube:Fully Associative Mapped Function
To be continued again.... :)

                                                                                                                                             Written by--®æŋ

Mapping Function 1.0

Hi~~ How are you guys?

Today, I want to show you what is Mapping Function.


First,
the mapping function can be divided into the 3 types below:
a)Direct Mapping
b) Associative Mapping
c) Set Associative Mapping

For example,

a cache of 64 KBytes
with a cache block of 4 Bytes
means the cache is organized as
16K lines of 4 bytes each.

Or,

a 16 MBytes main memory
with the same 4-byte blocks consists of
4M blocks of 4 bytes each.

From the solutions above,
the number of lines equals the capacity of the cache (or main memory) divided by the capacity of a cache block.



Direct Mapping



-Each block of main memory maps to ONLY one cache line.
-The address is in 2 parts.
-The least significant 'w' bits identify a unique word/byte within a block of main memory.
-The most significant 's' bits specify one of the 2^(s) memory blocks.








Direct Mapping Address Structure



Direct Mapping Cache Organization




Well...Let me do a summary for the direct mapping..

According to the all diagram above,

We can know that:
a)Address length = (s+w) bits.
b)Block size=Line size= 2^(w) words or bytes.
c)Number of blocks in main memory = 2^(s)
d)Number of lines in cache = m = 2^(r)
e)Size of tag = (s-r) bits


Direct mapping is simple and inexpensive, but each given block has a fixed location in the cache.
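The "fixed location" idea can be shown with a tiny made-up Python example (sizes are invented for illustration): each block maps to line (block mod m), where m is the number of lines in the cache.

```python
# Each main-memory block maps to exactly one cache line: line = block mod m.
# Toy sketch with a made-up 4-line cache, not a model of any real machine.
m = 4  # number of cache lines

def line_for_block(block_number):
    return block_number % m

# Blocks 0, 4 and 8 all compete for line 0 -- the fixed-location drawback.
print([line_for_block(b) for b in (0, 4, 8, 5)])   # -> [0, 0, 0, 1]
```

If a program alternates between blocks 0 and 4, they keep evicting each other from line 0 even while the other lines sit empty; the associative mappings in the next posts relax exactly this restriction.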


Youtube:Direct mapped function


To be continued....





                                                                                         Written by--®æŋ




Cache

Good Day to everyone =)

This post only discusses the Cache, so it will not be too long a read for you guys =)

What is Cache?
Everyone knows the word "Cache" or has heard it before, but they might not know what the Cache actually is or how it works inside the computer.

Today, I want to explain the "Cache" to you guys: how it works, and also the structure of the cache and main memory :)


Cache is a small amount of fast memory. It sits as an intermediate buffer between the CPU (Central Processing Unit) and the normal main memory. Furthermore, the cache contains a copy of portions of main memory.


























Cache Read Operation






Generate the RA (Reference Address) of the word to be read.
Check whether the word is inside the cache.
-If it is, deliver the word to the processor. (Known as a Cache Hit; it's fast.)
-If not, a block of main memory is read into the cache. (Known as a Cache Miss; it's slow.)
The locality of reference principle applies:
-Future references are likely to be to other words in the block just read into the cache.
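The read flow above can be sketched in Python (a toy model with made-up names and sizes, not a real memory system): on a miss, the whole block containing the word is loaded, so later reads of nearby words hit thanks to locality of reference.

```python
BLOCK_SIZE = 4
main_memory = list(range(64))   # pretend main memory: each value equals its address
cache = {}                      # block number -> list of words

def read(ra):
    block = ra // BLOCK_SIZE
    if block in cache:                        # cache hit: fast
        return cache[block][ra % BLOCK_SIZE]
    # cache miss: slow, read the whole block from main memory into the cache
    cache[block] = main_memory[block * BLOCK_SIZE:(block + 1) * BLOCK_SIZE]
    return cache[block][ra % BLOCK_SIZE]

print(read(10))   # miss: block 2 is loaded into the cache
print(read(11))   # hit: same block, thanks to locality of reference
```

Loading a whole block on a miss is what turns one slow access into many fast ones afterwards.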
















Typical Cache Organization



Cache Structure & Main Memory Structure





















In addition,
the size of the cache matters.
The more cache required, the more expensive it is.
The speed benefit of cache is like a car engine:
a more powerful engine makes the car faster,
and more cache inside the processor makes it faster too.
Last but not least, a larger cache involves more gates to check the data,
which slows it down and takes more time.

Well...
I think you guys should have gained some new information about cache
after viewing my post, right? xD

But remember, "One is never too old to learn".
For more information on cache,
you can search for it on Wikipedia or Google yourself. =)
Anyway, thanks for reading my post.
Have a nice day! See ya~~







                                                                                                                                           Written by--®æŋ