
Commit fffdd79

Quartz sync: Jan 25, 2026, 8:08 PM

1 parent c05129a · 12 files changed · 324 additions & 31 deletions


Lines changed: 189 additions & 0 deletions
@@ -0,0 +1,189 @@
>[!SUMMARY] Table of Contents
>- [[Cache Organization#Locality of Reference|Locality of Reference]]
>- [[Cache Organization#Working of Cache Memory|Working of Cache Memory]]
> - [[Cache Organization#Block|Block]]
> - [[Cache Organization#Average Memory Access Time|Average Memory Access Time]]
>- [[Cache Organization#Types of Cache Access|Types of Cache Access]]
> - [[Cache Organization#Simultaneous Access|Simultaneous Access]]
> - [[Cache Organization#Hierarchical Access|Hierarchical Access]]
> - [[Cache Organization#Memory Access Time when Locality of Reference is used|Memory Access Time when Locality of Reference is used]]
>- [[Cache Organization#Cache Write or Write Propagation|Cache Write or Write Propagation]]
> - [[Cache Organization#Write Through|Write Through]]
> - [[Cache Organization#Write Back|Write Back]]
> - [[Cache Organization#Write Miss|Write Miss]]
> - [[Cache Organization#Write Allocate|Write Allocate]]
> - [[Cache Organization#No Write Allocate|No Write Allocate]]
>- [[Cache Organization#Questions|Questions]]
# Locality of Reference
Programs tend to access the same memory location, or nearby memory locations, within short intervals of time.

Types -
1. **Temporal Locality -** Recently accessed memory locations are likely to be accessed again.
2. **Spatial Locality -** Memory locations close to the recently accessed memory are likely to be accessed.
3. **Sequential Locality -** Memory is accessed in strictly increasing order of address.

This phenomenon makes caching a **block of memory** efficient. The currently demanded localities are kept in a smaller and faster memory called the **cache**.
# Working of Cache Memory
![[Pasted image 20260125131640.png]]

Keywords -
1. Cache Hit - When the content demanded by the CPU is present in the cache.
2. Cache Miss - When the content demanded by the CPU is absent from the cache.
3. Hit Ratio $(H)$ - Fraction of all memory references that result in a Cache Hit.

$$
H = \frac{\text{No. of hits}}{\text{No. of memory references}}
$$
## Block
More specifically, a block is a fixed-size contiguous group of memory words transferred between the main memory and the cache memory as a single unit.

***Example -*** Whenever a Cache Miss occurs, the CPU retrieves the content from the Main Memory itself. But because of locality of reference, a **neighbourhood around that content** is brought into the cache to make future memory accesses **more efficient**. This neighbourhood is a block.
## Average Memory Access Time
Both a Cache Hit and a Cache Miss take some time to perform the content transfer. So the average memory access time is -

$$
\text{Avg. Mem. Access Time} = H*(\text{Time for cache hit}) + (1-H)*(\text{Time for cache miss})
$$

^a692b1
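
A quick numeric sketch of this formula (the function name and sample timings are made up for illustration):

```python
def avg_access_time(hit_ratio: float, t_hit: float, t_miss: float) -> float:
    """Generic average memory access time; times in ns."""
    return hit_ratio * t_hit + (1 - hit_ratio) * t_miss

# e.g. 90% of references hit and cost 10 ns, misses cost 100 ns
print(avg_access_time(0.9, 10, 100))  # 19.0
```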
# Types of Cache Access
## Simultaneous Access
The memory access request is **sent to both** the cache memory and the main memory. Hence this is also called **parallel access**.

![[Pasted image 20260125134951.png]]

The average memory access time is -

$$
T_{avg} = H*T_{cm} + (1-H)*T_{mm}
$$

Here $T_{cm}$ is the cache memory access time and $T_{mm}$ is the main memory access time.
## Hierarchical Access
The memory access request is sent to the main memory **only when a Cache Miss** occurs. Hence this is also called **serial access**.

The average memory access time is -

$$
\begin{aligned}
T_{avg} &= H*T_{cm} + (1-H)*(T_{cm}+T_{mm}) \\[8pt]
&= \cancel{H*T_{cm}} + T_{cm}+T_{mm} \cancel{- H*T_{cm}} - H*T_{mm} \\[8pt]
&= T_{cm} + (1-H) * T_{mm}
\end{aligned}
$$

The additional $T_{cm}$ in the second term of the formula is called the **"cache search/lookup time"**, because in the case of a cache miss the cache access is not for retrieving content from the cache but for checking its existence.

Cache search time is **zero** in the case of parallel access.

In such a memory organization -
- The Cache Memory is also called the **Top Level memory**.
- The Main Memory is also called the **Bottom Level memory**.
<h4 class="special">When to use which formula for Avg. Memory Access time?</h4>
If a question mentions the words "cache memory access time" and "main memory access time", only then move on to the formulas for Simultaneous or Hierarchical access.
- If "Hierarchy" or "Level" is mentioned, we are dealing with Hierarchical Access.
- Else we are dealing with Simultaneous Access.

Otherwise, if the question just mentions "time for cache hit" and "time for cache miss", use the [[Cache Organization#^a692b1|generic formula]].

See [[Cache Organization#^q1|Question 1]] for a simple example.
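
The two formulas side by side, as a minimal sketch (function names and sample values are made up for illustration):

```python
def t_avg_simultaneous(h, t_cm, t_mm):
    # request goes to the cache and the main memory in parallel
    return h * t_cm + (1 - h) * t_mm

def t_avg_hierarchical(h, t_cm, t_mm):
    # main memory is consulted only after a miss,
    # so every reference pays the cache lookup time T_cm
    return t_cm + (1 - h) * t_mm

h, t_cm, t_mm = 0.95, 2, 50  # sample values in ns
print(t_avg_simultaneous(h, t_cm, t_mm))  # 4.4
print(t_avg_hierarchical(h, t_cm, t_mm))  # 4.5
```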
## Memory Access Time when Locality of Reference is used
In the previous cases we only looked at retrieving the data directly from the main memory on a Cache Miss. But on a cache miss, the block to which the data belongs also needs to be brought into the cache memory for future use.

Let the block transfer time be $T_{bt}$.

Then, as sketched below -
1. Simultaneous Access - $T_{avg} = H*T_{cm} + (1-H)*T_{bt}$
2. Hierarchical Access - $T_{avg} = T_{cm} + (1-H)*T_{bt}$
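
With block transfer on a miss, $T_{bt}$ simply replaces $T_{mm}$ in the earlier sketch (again, names and values are illustrative):

```python
def t_avg_with_block_transfer(h, t_cm, t_bt, hierarchical=False):
    # on a miss the whole block is transferred, so T_bt takes the place of T_mm
    if hierarchical:
        return t_cm + (1 - h) * t_bt
    return h * t_cm + (1 - h) * t_bt

print(t_avg_with_block_transfer(0.95, 2, 100))                    # 6.9
print(t_avg_with_block_transfer(0.95, 2, 100, hierarchical=True)) # 7.0
```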
# Cache Write or Write Propagation
Write propagation means that, if the CPU performs a write operation on some data in the cache memory, then that same data should also be updated in the main memory.

![[Pasted image 20260125163924.png|550]]
## Write Through
If the CPU performs a write operation in the cache, it performs a write operation in the main memory **simultaneously/parallelly**. The data in the cache and the main memory are updated together.

- **Pro -** No inconsistency between the content in the cache memory and the content in the main memory.
- **Con -** Time consuming, because a write operation is performed on the main memory irrespective of a hit or miss in the cache.

Because the cache and main memory are accessed simultaneously, the cache memory would use **simultaneous access**. Thus, the time required for one read and one write operation is -

$$
\begin{aligned}
T_r &= H*T_{cm} + (1-H)*T_{mm} \\[8pt]
T_{w} &= \operatorname{max}(T_{cm}, T_{mm}) = T_{mm} \\[8pt]
T_{avg} &= \text{\% of read operations} * T_r + \text{\% of write operations} * T_w \\[8pt]
\text{Eff. Hit Ratio} &= \text{\% of hit read operations} = \text{\% of read operations} * H
\end{aligned}
$$

Out of all the cases of read/write operations hitting/missing, the cache memory is sufficient by itself only when a read-hit occurs. For the rest of the cases the main memory is involved too. This causes a **lower effective hit ratio** for a Write Through cache when compared with a Write Back cache.
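
Putting these together, a minimal sketch of the write-through timing (the function name and inputs are made up for illustration):

```python
def write_through_times(h, t_cm, t_mm, read_frac):
    """Average time and effective hit ratio for a write-through
    cache with simultaneous access; times in ns."""
    t_read = h * t_cm + (1 - h) * t_mm
    t_write = max(t_cm, t_mm)      # a write always touches main memory
    t_avg = read_frac * t_read + (1 - read_frac) * t_write
    eff_hit_ratio = read_frac * h  # only read hits stay inside the cache
    return t_avg, eff_hit_ratio

print(write_through_times(0.9, 5, 80, 0.7))  # (32.75, 0.63)
```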
## Write Back
If the CPU performs a write operation in the cache, the same content in the main memory is not updated simultaneously. Instead, the content in the main memory is updated when a block is replaced in the cache memory.

- **Pro -** Time saving compared to Write Through.
- **Con -** Inconsistency between the content in the cache memory and the main memory.

Any block that has been written to is called a **dirty/modified block**. For any block in the cache memory -
1. If no write was performed on that block - Directly replace the block without any write in the main memory.
2. If a write was performed on the block (if it's a dirty block) - Perform a write back for the block.

Unlike a Write Through cache, a Write Back cache is not restricted to strictly Simultaneous or strictly Hierarchical access. The average memory access time is -

1. Simultaneous Access -

$$
T_{avg} = H*T_{cm} + (1-H)*(T_{bt} + \text{write back time})
$$

2. Hierarchical Access -

$$
\begin{aligned}
T_{avg} &= H*T_{cm} + (1-H)*(T_{cm} + T_{bt} + \text{write back time}) \\[8pt]
&= T_{cm} + (1-H) * (T_{bt} + \text{write back time})
\end{aligned}
$$
The write back time is -

$$
\text{write back time} = \text{fraction of dirty blocks} * T_{bt}
$$
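
A minimal sketch of the write-back timing, including the dirty-block penalty (the function name and sample values are made up for illustration):

```python
def write_back_t_avg(h, t_cm, t_bt, dirty_frac, hierarchical=False):
    """Average access time for a write-back cache; times in ns.
    On a miss, a dirty victim block must be written back before
    the new block is transferred."""
    write_back_time = dirty_frac * t_bt
    miss_penalty = t_bt + write_back_time
    if hierarchical:
        return t_cm + (1 - h) * miss_penalty
    return h * t_cm + (1 - h) * miss_penalty

print(write_back_t_avg(0.9, 5, 100, 0.25))                     # 17.0
print(write_back_t_avg(0.9, 5, 100, 0.25, hierarchical=True))  # 17.5
```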
## Write Miss
A write miss is handled in one of two ways - by using Write Allocate or by using No Write Allocate. A simulation sketch of both policies follows the lists below.
### Write Allocate
On a write miss the block is loaded into the cache and then written in the cache itself. Usually used with a [[Cache Organization#Write Back|Write Back]] cache.

Write Back cache with Write Allocate -
1. Read -
	- Hit - CPU reads content from the cache.
	- Miss - CPU reads content from the main memory and brings the missing block to the cache memory by replacing an existing block **if needed**. If a dirty block is replaced, write it back to the main memory.
2. Write -
	- Hit - Perform the write in the cache.
	- Miss - Bring the missing block to the cache memory by replacing an existing block **if needed** and then perform the write operation on it in the cache. If a dirty block is replaced, write it back to the main memory.
### No Write Allocate
On a write miss the write is performed directly in the main memory and the block is not loaded into the cache. Used with a [[Cache Organization#Write Through|Write Through]] cache.

Write Through cache with No Write Allocate -
1. Read -
	- Hit - CPU reads content from the cache.
	- Miss - CPU reads content from the main memory and brings the missing block to the cache memory by replacing an existing block **if needed**.
2. Write -
	- Hit - Perform the write in the cache and the main memory simultaneously.
	- Miss - Perform the write in the main memory but do not bring the missing block to the cache.
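
Here is that sketch - a toy, fully-associative cache with a naive eviction order (class and method names are made up; a real cache also needs tags, indexing, and a proper replacement policy):

```python
class ToyCache:
    def __init__(self, capacity, write_back=True, write_allocate=True):
        self.capacity = capacity
        self.write_back = write_back
        self.write_allocate = write_allocate
        self.blocks = {}     # block number -> dirty flag
        self.writebacks = 0  # dirty evictions propagated to main memory

    def _load(self, block):
        if len(self.blocks) >= self.capacity:
            victim, dirty = self.blocks.popitem()  # naive eviction
            if dirty:
                self.writebacks += 1  # dirty victim is written back
        self.blocks[block] = False

    def read(self, block):
        if block not in self.blocks:  # read miss: always allocate
            self._load(block)

    def write(self, block):
        if block in self.blocks:      # write hit
            if self.write_back:
                self.blocks[block] = True  # mark dirty, defer the memory write
            # write-through: main memory is updated alongside the cache
        elif self.write_allocate:     # write miss with Write Allocate
            self._load(block)
            self.write(block)
        # No Write Allocate: the write goes straight to main memory

wb = ToyCache(capacity=2)  # Write Back + Write Allocate
for op, blk in [("w", 1), ("w", 2), ("r", 3), ("w", 1)]:
    wb.read(blk) if op == "r" else wb.write(blk)
print(wb.writebacks)  # 1 -> a dirty block was evicted on the read miss
```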
---
# Questions
^q1
<h6 class="question">Q1) If in a two level memory hierarchy, the top level memory access time is 8ns and the bottom level memory access time is 60ns, the hit-rate required is __ for the average access time to be 10ns. What is __?</h6>

$\underline{\text{Sol}^n} -$
Here, as "memory hierarchy" and "two level" are mentioned, we are dealing with a hierarchical access cache organization. So,

$$
\begin{alignedat}{3}
&&10 &= 8 + (1-H)*60 \\[8pt]
&\Rightarrow&\,\,10&= 8 + 60 - 60H \\[8pt]
&\Rightarrow&\,\,60H&= 58 \\[8pt]
&\Rightarrow&H&= \boxed{0.967} \\[8pt]
\end{alignedat}
$$
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
Content V/S Data -
1. Data - What the program ultimately wants to use.
2. Content - Any value stored in the memory.
# Associative Memory
Associative memory retrieves data **by content instead of by address**. Hence it is also known as **content addressable memory**.

Unlike an addressable memory where the CPU supplies an address, in associative memory the CPU supplies a **search key**. Each cell in this memory holds -
1. **Key/Tag -** What the CPU knows
2. **Value -** What the CPU wants

How it works (sketched in code at the end of this note) -
1. CPU sends a search key.
2. The search key is matched against the keys of all cells **simultaneously, in parallel**.
3. The cell with the matching key is identified.
4. The **associated value** of this cell is sent back to the CPU.
5. If multiple cells match the search key, the associated values of all of them are sent.

This memory is extremely fast, much faster than SRAM, but because the hardware requires the ability to compare the keys of all cells in the memory simultaneously, it is also very expensive.

Uses -
1. Caching
2. TLB (Translation Lookaside Buffer) in OS
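
A minimal sketch of the lookup behaviour (in hardware every cell compares its key in the same cycle; this sequential scan models only the behaviour, not the speed, and the keys/values are made up):

```python
cam = [
    ("0x1A", "page frame 7"),
    ("0x2B", "page frame 3"),
    ("0x1A", "page frame 9"),  # duplicate key: both values are returned
]

def cam_lookup(search_key):
    # conceptually, all cells are compared in parallel
    return [value for key, value in cam if key == search_key]

print(cam_lookup("0x1A"))  # ['page frame 7', 'page frame 9']
print(cam_lookup("0xFF"))  # [] -> no match
```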
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
1. [[Group Theory#Algebraic Structure|Algebraic Structure]]
2. [[Order Theory#Lattice|Lattice]]
3. [[Group Theory#Semi-group|Semi-group]]
4. [[Group Theory#Monoid|Monoid]]
5. [[Group Theory#Group|Group]]
6. [[Group Theory#Abelian Group|Abelian Group]]
7. [[Vectors and Vector Spaces#Field|Field]]

content/Discrete Maths/Set Theory.md

Lines changed: 18 additions & 1 deletion
@@ -1,5 +1,7 @@
 >[!SUMMARY] Table of Contents
 >- [[Set Theory#Set|Set]]
+> - [[Set Theory#Mutually Exclusive Sets|Mutually Exclusive Sets]]
+> - [[Set Theory#Collectively Exhaustive Sets|Collectively Exhaustive Sets]]
 >- [[Set Theory#Set Operations|Set Operations]]
 >- [[Set Theory#Principle of Inclusion and Exclusion|Principle of Inclusion and Exclusion]]
 >- [[Set Theory#Multiset|Multiset]]
@@ -10,7 +12,7 @@
 > - [[Set Theory#Reflexive Relation |Reflexive Relation ]]
 > - [[Set Theory#Irreflexive Relation|Irreflexive Relation]]
 > - [[Set Theory#Symmetric Relation|Symmetric Relation]]
-> - [[Set Theory#Anti-Symmetric Relation|AntiSymmetric Relation]]
+> - [[Set Theory#Anti-Symmetric Relation|Anti-Symmetric Relation]]
 > - [[Set Theory#Asymmetric Relation|Asymmetric Relation]]
 > - [[Set Theory#Transitive Relation|Transitive Relation]]
 > - [[Set Theory#Equivalence Relation|Equivalence Relation]]
@@ -52,7 +54,18 @@ $$
 &= \boxed{2^n}
 \end{aligned}
 $$
+## Mutually Exclusive Sets
+A collection of sets $A_1, \dots, A_k$ is called mutually exclusive iff,
 
+$$
+A_i \cap A_j = \phi,\quad\forall i\ne j
+$$
+## Collectively Exhaustive Sets
+A collection of sets $A_1, \dots, A_k$ is called collectively exhaustive iff,
+
+$$
+\bigcup_{i=1}^k A_i = U \qquad(U \text{ is the universal set})
+$$
 # Set Operations
 1. Union
 2. Intersection
@@ -214,6 +227,10 @@ $\qquad(OR)$
 
 A partition of a set $A$ is the grouping of all elements of $A$ into **non-empty subsets** such that every element only occurs in one subset.
 
+$\qquad(OR)$
+
+A partition is a collection of subsets of $A$ such that the subsets are both [[Set Theory#Mutually Exclusive Sets|mutually exclusive]] and [[Set Theory#Collectively Exhaustive Sets|collectively exhaustive]].
+
 1. Partition of $A=\phi=\{\}$ is $\phi=\{\}$.
 
 Check [[Set Theory#^q5|Question 5]] to see how to count the number of partitions of a set.
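
The two new definitions combine directly into a partition check - a minimal sketch (function names are made up for illustration):

```python
from itertools import combinations

def is_partition(subsets, universe):
    non_empty = all(len(s) > 0 for s in subsets)  # partition blocks are non-empty
    mutually_exclusive = all(a.isdisjoint(b) for a, b in combinations(subsets, 2))
    collectively_exhaustive = set().union(*subsets) == universe
    return non_empty and mutually_exclusive and collectively_exhaustive

A = {1, 2, 3, 4, 5}
print(is_partition([{1, 2}, {3}, {4, 5}], A))     # True
print(is_partition([{1, 2}, {2, 3}, {4, 5}], A))  # False - 2 occurs twice
```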

content/Mathematical Foundations of Generative AI.md

Whitespace-only changes.

content/Mathematical Foundations of Machine Learning I/Linear Algebra/Vectors and Vector Spaces.md

Lines changed: 13 additions & 0 deletions
@@ -1,3 +1,16 @@
+>[!SUMMARY] Table of Contents
+>- [[Vectors and Vector Spaces#Vector|Vector]]
+>- [[Vectors and Vector Spaces#Field|Field]]
+>- [[Vectors and Vector Spaces#Vector Space|Vector Space]]
+> - [[Vectors and Vector Spaces#Subspaces|Subspaces]]
+>- [[Vectors and Vector Spaces#Linear Combinations|Linear Combinations]]
+> - [[Vectors and Vector Spaces#Affine Combination|Affine Combination]]
+>- [[Vectors and Vector Spaces#Linear Dependence|Linear Dependence]]
+>- [[Vectors and Vector Spaces#Span, Basis, and Dimension|Span, Basis, and Dimension]]
+> - [[Vectors and Vector Spaces#Span|Span]]
+> - [[Vectors and Vector Spaces#Basis|Basis]]
+> - [[Vectors and Vector Spaces#Uniqueness of Representation Theorem|Uniqueness of Representation Theorem]]
+> - [[Vectors and Vector Spaces#Dimension|Dimension]]
 # Vector
 What is a vector?
 - **The Physics definition -** A vector is a quantity with both magnitude and direction.
Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
Probability is the study of the certainty/uncertainty around any decision or action.
# Random Experiment
A random experiment is an experiment with a known set of outcomes, but the outcome of a trial is unknown before the trial is conducted.
- The set of all possible outcomes of a random experiment is called the **sample space**.
- **An event** is a subset of the sample space that is of our interest.

**Parallels to Set Theory -**

| [[Set Theory]] | Probability Theory |
| :------------: | :----------------: |
| Universal Set  |    Sample Space    |
|     Subset     |       Event        |
| Singleton Set  |      Outcome       |
Binary files changed: three images (337 KB, 190 KB, 300 KB), not shown.
