COA Sample Paper | MST 2
COA Sample Paper 1
5*2 = 10 Marks
1. What is memory hierarchy and how is it organized?
Memory hierarchy refers to the organization of the different types of memory in a computer system into levels ordered by speed, size, and cost. The fastest, smallest, and most expensive memory (registers, then cache) sits closest to the CPU, followed by main memory, and finally large, slow, cheap secondary storage such as hard drives. The hierarchy works because programs exhibit locality of reference, so most accesses can be served by the fast upper levels.
2. How does cache memory work and what is its purpose in a computer system?
Cache memory is a small, fast memory that stores frequently accessed data and instructions. It sits between the CPU and main memory, and its purpose is to improve the performance of the system by reducing the number of times the CPU has to access main memory.
3. What is associative memory and how does it differ from traditional memory systems?
Associative memory is a type of memory that allows data to be accessed based on its content rather than its address. It differs from traditional memory systems, which use address-based access. Associative memory is often used in cache memory systems to speed up access to frequently used data.
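The contrast between content-based and address-based access can be sketched in a few lines of Python. The stored values and sizes below are illustrative only; real associative memory compares the key against every entry in parallel in hardware, which software can only simulate serially.

```python
# Address-based vs. content-based (associative) lookup.
ram = ["alpha", "beta", "gamma", "delta"]   # index in, data out

def address_lookup(address):
    return ram[address]                     # data found by WHERE it is

def associative_lookup(key):
    # Hardware CAM compares all entries at once; here we scan serially.
    for address, data in enumerate(ram):
        if data == key:
            return address                  # data found by WHAT it is
    return None

print(address_lookup(2))            # -> gamma
print(associative_lookup("gamma"))  # -> 2
```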
4. How does cache size relate to block size and what are the trade-offs between larger or smaller cache and block sizes?
Cache size and block size are two key parameters of a cache memory system. A larger cache can hold more data and generally improves the hit rate, but it costs more and has a longer access time. A larger block size exploits spatial locality, since neighboring data is fetched together in one transfer, but it also increases the miss penalty and, for a fixed cache size, leaves fewer lines, which can increase conflict misses. A smaller block size reduces the miss penalty and the amount of unused data fetched, but exploits less spatial locality and raises the relative overhead of tags and cache management.
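The "fewer lines for a fixed cache size" trade-off is simple arithmetic, shown below with illustrative sizes (a 32 KB cache and 64-byte blocks are assumptions, not fixed values):

```python
# Worked example: how cache size and block size determine the line count.
cache_size = 32 * 1024              # 32 KB cache (illustrative)
block_size = 64                     # 64-byte blocks (illustrative)

num_lines = cache_size // block_size
print(num_lines)                    # -> 512 lines

# Doubling the block size halves the number of lines:
print(cache_size // (block_size * 2))   # -> 256 lines
```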
5. What are mapping functions and replacement algorithms in cache memory systems?
Mapping functions determine how data is mapped from main memory to cache memory. There are several types of mapping functions, including direct mapping, fully associative mapping, and set associative mapping. Replacement algorithms determine which data is replaced in the cache when the cache is full. Popular replacement algorithms include least recently used (LRU) and random replacement.
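A mapping function is, concretely, a way of splitting a memory address into tag, index, and offset fields. The sketch below shows the split for a direct-mapped cache; the block size and line count are illustrative assumptions.

```python
# Splitting an address into tag / index / offset for a direct-mapped cache.
block_size = 64                 # bytes per block -> 6 offset bits (assumed)
num_lines  = 512                # cache lines     -> 9 index bits (assumed)

offset_bits = block_size.bit_length() - 1   # log2(64)  = 6
index_bits  = num_lines.bit_length() - 1    # log2(512) = 9

def split_address(addr):
    offset = addr & (block_size - 1)            # position within the block
    index  = (addr >> offset_bits) & (num_lines - 1)  # which cache line
    tag    = addr >> (offset_bits + index_bits)       # identifies the block
    return tag, index, offset

print(split_address(0x12345))   # -> (2, 141, 5)
```

The index selects the line, and the stored tag is compared against the address tag to decide hit or miss.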
2*5= 10 Marks
1. Explain the purpose of cache memory in a computer system and how it improves system performance.
Cache memory is a high-speed memory that stores frequently accessed data and instructions. Its purpose is to improve the performance of the computer system by reducing the number of times the CPU has to access main memory. When the CPU requests data, the cache is checked first; on a hit, the cache supplies the data directly without a main-memory access. Because most accesses hit due to locality of reference, the average access time drops sharply and overall system performance improves.
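The performance benefit can be quantified with the standard average memory access time (AMAT) formula. The latencies and miss rate below are illustrative numbers, not measurements of real hardware:

```python
# AMAT = hit_time + miss_rate * miss_penalty
hit_time     = 1        # cycles to access the cache (assumed)
miss_penalty = 100      # extra cycles to fetch from main memory (assumed)
miss_rate    = 0.05     # 5% of accesses miss (assumed)

amat = hit_time + miss_rate * miss_penalty
print(amat)             # -> 6.0 cycles, vs. ~100 cycles with no cache
```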
2. Describe the difference between direct mapping, fully associative mapping, and set associative mapping in cache memory systems.
Direct mapping is a type of cache mapping where each block of main memory can only be mapped to one specific block in the cache. Fully associative mapping allows any block of main memory to be mapped to any block in the cache. Set associative mapping is a compromise between the two, where each block of main memory can be mapped to a set of blocks in the cache. Direct mapping is simpler to implement but may lead to a higher rate of cache misses, while fully associative mapping is more complex but can reduce the number of cache misses.
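The three placement policies can be compared by asking where a given memory block is allowed to go. The sketch below assumes a tiny 8-line cache with 2-way sets; those numbers are illustrative.

```python
# Where may main-memory block N be placed under each mapping scheme?
num_lines = 8                       # total cache lines (assumed)
ways = 2                            # associativity (assumed)
num_sets = num_lines // ways        # -> 4 sets

def direct_mapped_line(block):
    return block % num_lines        # exactly one legal line

def set_associative_lines(block):
    s = block % num_sets            # one legal SET, any way within it
    return [s * ways + w for w in range(ways)]

def fully_associative_lines(block):
    return list(range(num_lines))   # any line is legal

print(direct_mapped_line(13))       # -> 5
print(set_associative_lines(13))    # -> [2, 3]
print(fully_associative_lines(13))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```

More legal positions mean fewer conflict misses but more tag comparators, which is exactly the complexity trade-off described above.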
3. What is a write policy in cache memory and how does it affect system performance?
Write policy is a cache memory management technique that determines when a write to the cache is propagated to main memory. Write-through writes data to main memory immediately whenever it is written to the cache, keeping memory consistent at the cost of more memory traffic. Write-back writes a modified (dirty) block to main memory only when that block is evicted from the cache. Write-back usually performs better because repeated writes to the same block reach main memory only once, but the most recent data can be lost if the system crashes or loses power before a dirty block is written back.
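The traffic difference between the two policies can be shown with a toy model of a single cache line (the class names and the 10-write workload are illustrative assumptions):

```python
# Toy model: memory traffic under write-through vs. write-back.
class WriteThroughLine:
    def __init__(self):
        self.data = None
        self.memory_writes = 0
    def write(self, value):
        self.data = value
        self.memory_writes += 1     # every write also goes to main memory

class WriteBackLine:
    def __init__(self):
        self.data = None
        self.dirty = False
        self.memory_writes = 0
    def write(self, value):
        self.data = value
        self.dirty = True           # main memory is now stale
    def evict(self):
        if self.dirty:
            self.memory_writes += 1 # one write-back at eviction time
            self.dirty = False

wt, wb = WriteThroughLine(), WriteBackLine()
for v in range(10):                 # 10 writes to the same block
    wt.write(v)
    wb.write(v)
wb.evict()
print(wt.memory_writes, wb.memory_writes)   # -> 10 1
```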
4. What are some basic optimization techniques used in cache memory systems?
Basic optimization techniques in cache memory systems include increasing cache size, using a larger block size, using a set-associative mapping scheme, and implementing a smart replacement algorithm like LRU. Increasing the cache size allows more data to be stored in the cache, which reduces capacity misses. Using a larger block size exploits spatial locality, since neighboring data is brought in together, though an excessively large block raises the miss penalty. Set-associative mapping provides a balance between the simplicity of direct mapping and the flexibility of fully associative mapping, reducing conflict misses. LRU is a replacement algorithm that evicts the least recently used block in a set, which tends to improve the hit rate because recently used data is likely to be used again.
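LRU replacement for one cache set can be sketched with an `OrderedDict` that keeps blocks in recency order. The capacity and access trace below are illustrative assumptions:

```python
# LRU replacement for a single cache set (toy model).
from collections import OrderedDict

class LRUSet:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()         # least recently used first

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)    # hit: mark most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False) # miss: evict the LRU block
        self.blocks[tag] = True
        return False

s = LRUSet(capacity=2)
trace = ["A", "B", "A", "C", "B"]
hits = [s.access(t) for t in trace]
print(hits)   # A miss, B miss, A hit, C miss (evicts B), B miss
```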
5. What is virtual memory and how does it work? Describe the differences between paging and segmentation.
Virtual memory is a technique used by operating systems to allow programs to use more memory than is physically available in the system. It works by keeping only the actively used parts of a program in main memory and temporarily moving the rest to secondary storage such as a hard drive; this movement is called swapping (or demand paging when done at page granularity). Paging and segmentation are the two ways of dividing the address space. Paging divides memory into fixed-size pages, which avoids external fragmentation but can waste space inside a partially used page (internal fragmentation). Segmentation divides memory into variable-sized segments that correspond to logical units of the program, such as code, data, and stack, which matches program structure but can cause external fragmentation. The operating system manages the movement of pages or segments between main memory and secondary storage to optimize system performance.
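Address translation under paging can be sketched with a tiny page table. The 4 KB page size and the page-table contents below are illustrative assumptions:

```python
# Translating a virtual address with a toy page table.
PAGE_SIZE = 4096                    # 4 KB pages -> 12 offset bits (assumed)

page_table = {0: 7, 1: 3, 2: 9}     # virtual page -> physical frame (assumed)

def translate(virtual_addr):
    vpn    = virtual_addr // PAGE_SIZE  # virtual page number
    offset = virtual_addr % PAGE_SIZE   # offset is unchanged by translation
    if vpn not in page_table:
        # Page fault: the OS would load the page from disk and retry.
        raise MemoryError("page fault on virtual page %d" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))       # page 1 maps to frame 3 -> 0x3234
```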