Beyond Code & Karma · Data Structures & Algorithms
Every structure you write encodes a consequence. Every algorithm you run plays out a cosmic law. This is DSA — explained through karma.
The Foundation
Before we write a single line of code.
In Sanskrit, karma simply means action. Not punishment. Not fate. Just — action, and its inevitable consequence. The Bhagavad Gita frames it plainly: every act sets a chain of effects into motion, and the universe is nothing but the sum total of all these chains running in parallel.
Data Structures and Algorithms are the same thing, written in code. When you push onto a stack, you are committing an action. When you insert into a tree, you are altering the shape of all future searches. When you recurse, you are creating a chain that must unwind itself before anything else continues. DSA is not abstract theory. It is karma — made executable.
DSA runs on the same law. You choose the action — the structure and the algorithm. Time and space complexity are the fruits — and the machine collects them, always, precisely, without opinion.
01 / Arrays & Strings
Contiguous. Ordered. Indexed by the cosmos.
In Vedic cosmology, the Akasha is the primordial space — the medium through which all things exist, ordered and retrievable by position. It is the substrate of reality itself: a flat, continuous expanse where every element has a fixed coordinate in the cosmos.
An array is exactly this. A block of contiguous memory — every element at a known offset from the first. No searching required to reach index i. You know its address because the universe already knows where everything is. O(1) access is not a trick of engineering — it is the nature of akashic knowledge. You are not searching. You are remembering.
You declare the intent and the size. int arr[n] is a sankalpa — a vow made to memory. The space is reserved whether you fill it or not. Karma begins at declaration.
Reads are weightless — O(1), instant, frictionless. But insertions and deletions in the middle carry cost. To insert at index k, every element after it must shift. The structure remembers every displacement you caused.
Cache-friendly traversal is the reward for contiguous memory. Arrays are the fastest structure for sequential scans — because the CPU's prefetcher can predict exactly where you are going next. Good karma, earned at design time.
Past and future converging toward the centre — balance achieved when they meet.
```javascript
// Two Pointer — the karma of balance
// Meeting in the middle: past and future converging
function isPalindrome(s) {
  let left = 0, right = s.length - 1;
  while (left < right) {
    if (s[left] !== s[right]) return false; // imbalance
    left++;  // karma moves inward from the past
    right--; // karma moves inward from the future
  }
  return true; // balance achieved — moksha
}
```

```javascript
// Sliding Window — present-moment awareness
// Never leaping ahead, never clinging to what passed
function maxSumWindow(arr, k) {
  let windowSum = arr.slice(0, k).reduce((a, b) => a + b, 0);
  let maxSum = windowSum;
  for (let i = k; i < arr.length; i++) {
    windowSum += arr[i] - arr[i - k]; // release the past, receive the present
    maxSum = Math.max(maxSum, windowSum);
  }
  return maxSum;
}
```

| Operation | Complexity | Karma Equivalent |
|---|---|---|
| Access by index | O(1) | Akashic knowledge — instant recall |
| Search (unsorted) | O(n) | Scanning all past actions |
| Insert at middle | O(n) | Displacing many to make room for one |
| Insert at end | O(1) amortised | Adding to the present moment |
| Delete from middle | O(n) | Every consequence must shift |
Arrays pay this price because they are contiguous. Every element after the insertion point must physically move one slot to the right — there is no free space to borrow. You must displace. In karmic terms: you cannot insert a new act into the middle of your past without shifting the meaning of everything that followed.
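A minimal sketch of that displacement (the helper name insertAt is illustrative, not a built-in):

```javascript
// Insert-at-middle, spelled out: every element after index k must shift
// one slot right before the new value can take its place — O(n) moves.
function insertAt(arr, k, value) {
  arr.push(undefined);                  // grow by one slot at the end
  for (let i = arr.length - 1; i > k; i--) {
    arr[i] = arr[i - 1];                // displace each later element
  }
  arr[k] = value;                       // the new act takes its position
  return arr;
}
```

JavaScript's built-in Array.prototype.splice does the same shifting under the hood; spelling it out makes the O(n) cost visible.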
A static array declares a fixed karmic contract at birth — you cannot renegotiate its size. A dynamic array doubles its allocated capacity when full, copying every element to a new block. The occasional O(n) copy is amortised over the many cheap insertions between doublings, so each insertion costs O(1) on average.
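A toy version of that growth policy — the DynArray class below is a hypothetical illustration, not a real library type:

```javascript
// A toy dynamic array: capacity doubles when full, so pushes are
// O(1) amortised even though an occasional push triggers an O(n) copy.
class DynArray {
  constructor() {
    this.capacity = 1;
    this.length = 0;
    this.data = new Array(this.capacity);
  }
  push(value) {
    if (this.length === this.capacity) {
      this.capacity *= 2;                        // renegotiate the contract
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) {
        bigger[i] = this.data[i];                // copy all past elements
      }
      this.data = bigger;
    }
    this.data[this.length++] = value;
  }
}
```

Five pushes trigger copies at capacities 1, 2, and 4 — but spread across all five insertions, the average work per push stays constant.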
A string is an array of characters with the added constraint of immutability in many languages. In Python and JavaScript, s + t creates a new string — the past is immutable. Only new karma can be created, never old karma rewritten. This is not a limitation; it is dharma.
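A small sketch of that immutability in JavaScript (variable names are illustrative):

```javascript
// s + t allocates a fresh string; the originals persist untouched.
const s = "kar";
const t = "ma";
const joined = s + t;            // new karma — s and t are not rewritten

// Building a long string by repeated + copies the past on every step
// (O(n^2) overall); collecting pieces and joining once keeps it linear.
const pieces = [];
for (let i = 0; i < 3; i++) pieces.push("om");
const chant = pieces.join("");
```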
02 / Linked Lists
Each node knows only one thing: what comes next.
The Vedic concept of Krama — sequential progression — holds that existence moves through a chain of causes and effects, each moment pointing to the next, with no guaranteed memory of the one before. You cannot teleport to the middle of your life. You must traverse it, node by node, from the beginning.
Each node holds a value and a pointer — its only knowledge of the world is a single reference to what follows. There is no index. To reach the fifth element, you must pass through the first four. Yet this apparent limitation is also a profound freedom: insertion and deletion are O(1) if you are already there. No shifting required. You change two pointers — and a new karma is seamlessly woven into the chain.
Each node is a discrete moment in time: a value (what happened) and a pointer (what it caused). Nodes do not know their own index. They only know their successor. This is how life actually works — you do not walk through existence with a GPS coordinate. You simply follow what is next.
To reach a node, you must begin at the head and walk the entire chain — O(n), unavoidable, by design. It is not inefficiency; it is respect for sequence. The pilgrim does not teleport to Kashi. They walk every mile.
O(1) insertion anywhere, if you hold the pointer. You redirect two links. The chain reforms around the new node as if it had always been there. You do not rewrite the past — you change what the past now points to.
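A sketch of those two redirections, assuming a minimal node shape of { val, next }:

```javascript
// O(1) insertion when you already hold the predecessor node:
// the new node inherits the future, and the past is redirected to it.
function insertAfter(node, val) {
  const fresh = { val, next: node.next }; // the new moment inherits what follows
  node.next = fresh;                      // the past now points to it
  return fresh;
}
```

No element shifts; only two references change, regardless of the chain's length.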
```javascript
// Reverse a Linked List — karma's full circle
function reverseList(head) {
  let prev = null;        // the void before the first action
  let curr = head;        // the present moment
  while (curr) {
    let next = curr.next; // remember what comes next
    curr.next = prev;     // redirect: face what was before
    prev = curr;          // the past becomes the new present
    curr = next;          // move forward
  }
  return prev;            // the last becomes first
}
```

```javascript
// Floyd's Cycle Detection — the tortoise and the hare
// In karma: if a cycle exists, you will keep meeting your past
function hasCycle(head) {
  let slow = head; // the deliberate pilgrim
  let fast = head; // the rushed one
  while (fast && fast.next) {
    slow = slow.next;      // one step
    fast = fast.next.next; // two steps — always rushing
    if (slow === fast) return true; // the rushed meets the deliberate in a cycle
  }
  return false; // no cycle — linear karma, resolves and ends
}
```

| Operation | Complexity | Karma Equivalent |
|---|---|---|
| Access by index | O(n) | Walk every step to get there |
| Insert at head | O(1) | Acting in the present is instantaneous |
| Insert at tail (tail ptr) | O(1) | Appending to the chain, no cost |
| Delete (with prev ptr) | O(1) | Releasing a link — two redirections |
| Search | O(n) | Pilgrimage — no shortcuts |
Use a linked list when your dominant operations are insertions and deletions at known positions, and random access is rare. Queues and stacks built on linked lists avoid costly resizing. The tradeoff: O(n) traversal instead of O(1) access. Choose your karma based on which operations you commit to most.
A node that holds both a next pointer and a prev pointer — awareness of what came before, not just what comes after. This enables O(1) deletion without a separate pointer to the previous node, and makes traversal in either direction trivial. A being that remembers its past and faces its future. The cost is extra memory per node — wisdom always carries a small overhead.
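A sketch of that O(1) self-removal, assuming nodes shaped as { val, prev, next }:

```javascript
// Doubly linked deletion: the node carries enough awareness (prev and
// next) to unlink itself — two redirections, no traversal required.
function unlink(node) {
  if (node.prev) node.prev.next = node.next; // the past skips over this moment
  if (node.next) node.next.prev = node.prev; // the future forgets it too
  node.prev = node.next = null;              // released from the chain
}
```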
After Floyd's detects a cycle, you find the entry point by moving one pointer back to head, then walking both one step at a time — they meet exactly at the cycle's entry. In karma: once you recognise you are in a loop, the way out is to return to where the loop began, and consciously redirect from there.
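That second phase can be sketched directly (nodes assumed to be { val, next }):

```javascript
// Phase two of Floyd's: after slow and fast meet inside the cycle,
// restart one pointer at head; stepping both one at a time lands them
// together exactly at the cycle's entry node.
function cycleEntry(head) {
  let slow = head, fast = head;
  while (fast && fast.next) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow === fast) {               // they met inside the loop
      let walker = head;               // return to where it all began
      while (walker !== slow) {
        walker = walker.next;
        slow = slow.next;
      }
      return walker;                   // the cycle's entry point
    }
  }
  return null;                         // linear karma — no cycle
}
```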
03 / Trees & Graphs
Every choice branches the world. Every node reflects every other.
In the fifteenth chapter of the Bhagavad Gita, Krishna describes the Ashwattha — the sacred cosmic fig tree — with a striking inversion: its roots are above, and its branches below. The source is in the unseen. The manifestation is in the world. Every branch is a consequence of a root; every leaf is a consequence of a branch.
A tree in computer science is the same structure. The root holds the origin. Every node has exactly one parent. Children are the consequences of their parent. The path from root to any node is the complete causal chain that produced it. You cannot reach a node by any path other than the one the tree permits. Dharma, encoded.
In a BST, every insertion obeys one absolute rule: smaller values go left, larger go right. There is no negotiation. You are placed according to what you are relative to everything already there. This is pure dharma — determined by value, not by wish.
Search in a balanced BST is O(log n) — at every node, half of the remaining universe is eliminated. This is why a balanced tree is so powerful: it encodes the most efficient path to any truth, given that truths are ordered.
An unbalanced BST degrades to O(n) — it becomes a linked list. Every choice led the same way, and balance was lost. AVL and Red-Black trees enforce rebalancing — structures with self-correcting karma, designed to prevent the accumulation of imbalance.
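The imbalance those trees guard against can be measured with a short sketch (helper names are illustrative; real AVL implementations cache heights rather than recomputing them):

```javascript
// AVL's invariant: at every node, |height(left) - height(right)| <= 1.
// This check walks the tree and reports whether balance has been lost.
function height(node) {
  if (!node) return 0;
  return 1 + Math.max(height(node.left), height(node.right));
}

function isBalanced(node) {
  if (!node) return true;
  const skew = Math.abs(height(node.left) - height(node.right));
  return skew <= 1 && isBalanced(node.left) && isBalanced(node.right);
}
```

A degenerate chain fails this check immediately — the signal for an AVL or Red-Black tree to rotate and restore balance.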
```javascript
class TreeNode {
  constructor(val) {
    this.val = val;
    this.left = null;  // lesser karma — actions of lesser weight
    this.right = null; // greater karma — actions of greater weight
  }
}

function insert(root, val) {
  if (!root) return new TreeNode(val); // a new moment arrives
  if (val < root.val) root.left = insert(root.left, val);
  else if (val > root.val) root.right = insert(root.right, val);
  return root;
}

function search(root, val) {
  if (!root) return false;           // karma does not exist here
  if (root.val === val) return true; // found — this moment exists
  if (val < root.val) return search(root.left, val);
  return search(root.right, val);
}
```

```javascript
// In-order — yields sorted sequence (the dharmic order)
function inOrder(node) {
  if (!node) return;
  inOrder(node.left);    // resolve all lesser karma first
  console.log(node.val); // be present with this moment
  inOrder(node.right);   // then face the greater
}

// Pre-order — act before knowing consequences
function preOrder(node) {
  if (!node) return;
  console.log(node.val); // the leader who acts first
  preOrder(node.left);
  preOrder(node.right);
}

// Post-order — the sage who resolves all before speaking
function postOrder(node) {
  if (!node) return;
  postOrder(node.left);
  postOrder(node.right);
  console.log(node.val); // only then, the root speaks
}
```

Indra's Net is one of the most breathtaking metaphors in Hindu and Buddhist cosmology. A vast net stretching infinitely in all directions — and at every vertex, a jewel. Each jewel reflects every other jewel in the net. There is no centre. There is no hierarchy. There are only nodes, and the reflections between them.
A graph is Indra's Net. Nodes connected by edges — with no imposed hierarchy, no single root. Edges can be directed (karma flows one way) or undirected (karma is mutual). A graph is the most general structure in DSA — every other structure is a constrained graph. A tree is a connected acyclic graph. A linked list is a linear graph.
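One common way to weave the net in code is an adjacency list — a sketch, with the helper name buildGraph chosen for illustration:

```javascript
// Adjacency list: each node maps to the jewels it reflects.
// Undirected edges are recorded in both directions — mutual karma.
function buildGraph(edges) {
  const graph = {};
  for (const [u, v] of edges) {
    (graph[u] = graph[u] || []).push(v);
    (graph[v] = graph[v] || []).push(u); // the reflection runs both ways
  }
  return graph;
}
```

For a directed graph, drop the second push and karma flows one way only.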
```javascript
// BFS — karma rippling outward, layer by layer
function bfs(graph, start) {
  const visited = new Set();
  const queue = [start];
  visited.add(start);
  while (queue.length) {
    const node = queue.shift(); // nearest karma first
    console.log(node);
    for (const n of graph[node]) {
      if (!visited.has(n)) { visited.add(n); queue.push(n); }
    }
  }
}
```

```javascript
// DFS — follow one thread to its deepest consequence
function dfs(graph, node, visited = new Set()) {
  if (visited.has(node)) return; // already resolved — no repeat
  visited.add(node);
  console.log(node);
  for (const n of graph[node]) dfs(graph, n, visited);
}
```

| Structure | Access | Search | Insert | Karma |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | Akashic — fixed, indexed cosmos |
| Linked List | O(n) | O(n) | O(1) with pointer | Krama — sequential, fluid chain |
| BST (balanced) | O(log n) | O(log n) | O(log n) | Dharmic law — ordered consequence |
| Graph (adj list) | O(1) | O(V+E) | O(1) | Indra's Net — interconnected web |
A DAG is a graph where edges have direction and no path leads back to a starting node. It is the structure of pure causality: A causes B, B causes C, but C does not cause A. Dependency resolution, build systems, course prerequisites — all are DAGs. In karma: the graph of a life in which no old karma loops back to trap you.
An ordering of nodes in a DAG such that for every directed edge (u → v), u comes before v. It answers: in what order must things happen? Given that karma A must precede karma B — what is the valid sequence of all actions? Used in build systems, task schedulers, and course planning. Computed via DFS or Kahn's algorithm (BFS with in-degree tracking).
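Kahn's algorithm can be sketched briefly (topoSort is an illustrative name; the graph is assumed to map each node to its successors):

```javascript
// Kahn's algorithm: repeatedly release the nodes with no unresolved
// prerequisites (in-degree 0), peeling the DAG layer by layer.
function topoSort(graph) {
  const inDegree = {};
  for (const u in graph) inDegree[u] = inDegree[u] || 0;
  for (const u in graph) {
    for (const v of graph[u]) inDegree[v] = (inDegree[v] || 0) + 1;
  }
  const queue = Object.keys(inDegree).filter(u => inDegree[u] === 0);
  const order = [];
  while (queue.length) {
    const u = queue.shift();                  // a karma with no prerequisites
    order.push(u);
    for (const v of graph[u] || []) {
      if (--inDegree[v] === 0) queue.push(v); // its dependents are freed
    }
  }
  return order; // shorter than the node count ⇒ a cycle blocked the rest
}
```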
Dijkstra's finds the shortest weighted path from a source to all other nodes in a non-negative-weight graph. It uses a priority queue (min-heap) to always process the node with the smallest known distance next. In karma: the algorithm that always resolves the most immediate, lightest consequence first — the path of least resistance through a weighted world.
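A compact sketch of the idea — a real implementation uses a min-heap, but JavaScript has no built-in one, so a linear scan stands in for the priority queue here (O(V²) instead of O((V+E) log V), traded for clarity):

```javascript
// Dijkstra's shortest paths, non-negative weights.
// graph: { node: [[neighbour, weight], ...] }
function dijkstra(graph, source) {
  const dist = {};
  for (const u in graph) dist[u] = Infinity;
  dist[source] = 0;
  const unvisited = new Set(Object.keys(graph));
  while (unvisited.size) {
    let u = null; // pick the lightest pending karma — the min-heap's job
    for (const v of unvisited) if (u === null || dist[v] < dist[u]) u = v;
    unvisited.delete(u);
    for (const [v, w] of graph[u]) {
      if (dist[u] + w < dist[v]) dist[v] = dist[u] + w; // a lighter path found
    }
  }
  return dist;
}
```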
04 / Recursion & Dynamic Programming
The function that calls itself. The solution that remembers.
Samsara — the cycle of birth, death, and rebirth — continues not because there is no escape, but because the base case has not yet been reached. Every iteration is a recursive call with slightly different parameters. The cycle ends when moksha is achieved — the base case that stops the recursion and returns to the caller with final resolution.
Each recursive call waits on the stack — suspended, incomplete — until the call below it resolves. The call stack is the ledger of all unresolved karma. When the base case is reached, every suspended call above it resolves in sequence, unwinding the stack, releasing karma in reverse order of accumulation. Without a base case, recursion is infinite samsara — a stack overflow. Moksha is not optional. It must be explicitly coded.
Every recursive call is an action with a promise: I will return with an answer. The call is pushed onto the stack. It cannot complete until its own recursive calls complete. Each layer is a suspended promise, waiting.
The condition that requires no further recursion. if (n === 0) return 1. This is the moment of liberation — no more calling, no more deferring. An answer is returned, and the unwinding begins.
As the base case resolves, its return value passes to the frame above it, which then resolves, passing its value upward — until the original call receives its final answer. The fruit of every recursive action flows upward through the stack in reverse order of commitment.
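The whole mechanism in miniature — each frame suspended until the one below it returns:

```javascript
// factorial(3) pushes three frames; the base case at n === 0 starts the
// unwinding, and each frame multiplies its n into the answer flowing up.
function factorial(n) {
  if (n === 0) return 1;       // base case — liberation, the unwinding begins
  return n * factorial(n - 1); // this frame waits, suspended, for the call below
}
```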
```javascript
// Naive recursion — karma with no memory
// Suffering the same lesson exponentially
function fib(n) {
  if (n <= 1) return n;           // base case — moksha
  return fib(n - 1) + fib(n - 2); // two recursive paths of karma
}
// fib(n) generates O(2^n) calls — exponential samsara
// fib(3) computed twice, fib(2) computed three times…
```

Samskara in Hindu philosophy are the deep impressions left by past experiences — the accumulated wisdom of actions already lived. A being with rich samskaras does not need to re-experience every lesson. The wisdom is already there, embedded in the structure of the self.
Memoization is samskara implemented in code. When a subproblem is solved, its answer is stored. The next time the same subproblem arises, the stored answer is returned instantly — no recomputation, no re-suffering. This transforms naive recursion from O(2^n) to O(n). Dynamic Programming formalises this into a bottom-up discipline: build samskara deliberately, from the ground up.
```javascript
// Memoized recursion — samskara applied
// Each lesson learned once, never re-suffered
function fibMemo(n, memo = {}) {
  if (n in memo) return memo[n]; // wisdom already earned
  if (n <= 1) return n;          // moksha
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];                // store before returning
}
// Time: O(n) — each subproblem solved exactly once
```

```javascript
// Climbing Stairs — bottom-up DP (tabulation)
// How many ways to reach stair n, taking 1 or 2 steps at a time?
function climbStairs(n) {
  if (n <= 2) return n;
  const dp = new Array(n + 1);
  dp[1] = 1; // one way to reach stair 1
  dp[2] = 2; // two ways to reach stair 2
  for (let i = 3; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2]; // the present is the sum of two recent pasts
  }
  return dp[n];
}
// This IS Fibonacci. Stair paths and recursive sequences
// share the same underlying structure. Samsara and DSA are
// the same pattern, wearing different names.
```

Memoization (top-down) starts with the original problem and recurses downward, storing results as it goes — only computing what is needed. Tabulation (bottom-up) starts with base cases and builds upward through all subproblems. Tabulation avoids recursion-stack overhead; memoization avoids computing unused subproblems. Choose based on which direction is more natural for the problem.
Look for two signals: overlapping subproblems (the same calculation needs to happen multiple times) and optimal substructure (the optimal solution can be built from optimal solutions to its parts). Classic patterns: Fibonacci, 0/1 knapsack, longest common subsequence, coin change, edit distance.
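One classic from that list, sketched as bottom-up DP — coin change, the fewest coins that sum to a target:

```javascript
// Both signals are present: each smaller amount recurs (overlapping
// subproblems), and the best answer for amount a builds on the best
// answer for a - coin (optimal substructure).
function coinChange(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // zero coins make zero — the base case
  for (let a = 1; a <= amount; a++) {
    for (const c of coins) {
      if (c <= a) dp[a] = Math.min(dp[a], dp[a - c] + 1);
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount]; // -1: unreachable
}
```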
Every recursive call is a new frame pushed onto the call stack — containing local variables, parameters, and a return address. Frames are popped in LIFO order as calls resolve. Deep recursion risks stack overflow because the stack has finite size. Tail-call optimisation (supported in some environments) reuses the current frame instead of creating a new one, eliminating stack growth for tail-recursive functions.
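A sketch of what "tail position" means — the recursive call is the frame's last act, with the running answer carried forward, so an engine that implements tail-call optimisation can reuse the frame rather than stack a new one (most JavaScript engines do not; the shape is what matters here):

```javascript
// Tail-recursive sum: nothing is left to do after the call returns,
// so no suspended work needs a frame of its own.
function sumTo(n, acc = 0) {
  if (n === 0) return acc;      // nothing deferred — the answer is complete
  return sumTo(n - 1, acc + n); // tail position: the call is the final act
}
```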
05 / Karma Koans
Four questions. One right answer each. Phala awaits.
A linked list cycle is detected using Floyd's algorithm. The slow pointer moves 1 step; the fast pointer moves 2. If a cycle exists, what happens?
Which traversal of a Binary Search Tree yields elements in sorted ascending order?
What is the time complexity of naive (non-memoized) recursive Fibonacci — fib(n)?
fib(n) makes two recursive calls. The call tree roughly doubles at each level — approximately 2ⁿ total calls. Memoization collapses this to O(n) by ensuring each unique value of n is computed exactly once. Exponential samsara → linear progress.
In a weighted graph, Dijkstra's algorithm uses which data structure to efficiently select the next node to process?
The Enlightenment
Every data structure is a set of committed rules — rules about how actions (operations) relate to consequences (time and space complexity). You do not choose a structure for aesthetic reasons. You choose it because its dharmic law matches the pattern of your problem.
The algorithm is karma in motion. The data structure is the dharma that shapes how that karma unfolds. Master both, and you do not just write code — you design consequence.