Intended as a personal learning exercise to knock the rust off my knowledge of Algorithms, Data Structures, and Big O time/space complexity analysis, while solidifying my knowledge of TypeScript and Mutation Testing (via Stryker). Work in progress.
Note: Certain data structures (e.g. Queues and Stacks) are implemented using JavaScript's array type (`Array<T>` in TypeScript), which is implemented as a dynamic array.
Worst-case time complexity of the implemented data structures.

Data Structure | Access (by Index) | Search (by Value) | Insert | Remove |
---|---|---|---|---|
Linked List[^1] | O(n) | O(n) | O(1) | O(1) |
Array | O(1) | O(n) | O(n) | O(n) |
Hash Table[^2] | - | O(n) | O(n) | O(n) |
Skip List | O(n) | O(n) | O(n) | O(n) |
Self-Balancing Binary Search Tree[^3] | - | O(log(n)) | O(log(n)) | O(log(n)) |
Expected-case time complexity, for those data structures where it differs from the worst case.
Data Structure | Access (by Index) | Search (by Value) | Insert | Remove |
---|---|---|---|---|
Hash Table[^4] | - | O(1) | O(1) | O(1) |
Skip List[^5] | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) |
AVL Tree | - | O(log(n)) | O(log(n)) | O(log(n)) |
Red-Black Tree | - | O(log(n)) | O(log(n)) | O(log(n)) |
For those data structures with notable space complexity and memory characteristics.
Data Structure | Description |
---|---|
Hash Table | A higher frequency of hash function collisions results in greater memory usage and possible memory fragmentation due to the increased reliance on linked lists to store data. Specifying a larger table size can reduce collisions if a good hash function is used, but also has its own memory cost. The optimal table size depends on the application[^6]. |
Skip List | Controlled by the probability function. There is a trade-off between expected-case time complexity and space complexity. More frequent promotions mean a deeper list structure but also faster (on average) lookups. |
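As an illustration of the collision and load-factor trade-offs above, here is a minimal sketch of a hash table that resolves collisions by chaining (the class and method names are illustrative, not the repo's implementation):

```typescript
// Minimal chained hash table sketch. A smaller table size (or a weaker hash
// function) raises the collision rate, which lengthens the chains and
// degrades set/get toward O(n).
type Entry<V> = { key: string; value: V };

class ChainedHashTable<V> {
  private buckets: Entry<V>[][];
  private count = 0;

  constructor(private tableSize = 16) {
    this.buckets = Array.from({ length: tableSize }, () => []);
  }

  // Simple djb2-style string hash; hash quality drives chain length.
  private hash(key: string): number {
    let h = 5381;
    for (let i = 0; i < key.length; i++) h = (h * 33 + key.charCodeAt(i)) >>> 0;
    return h % this.tableSize;
  }

  set(key: string, value: V): void {
    const bucket = this.buckets[this.hash(key)];
    const existing = bucket.find((e) => e.key === key);
    if (existing) {
      existing.value = value;
    } else {
      bucket.push({ key, value });
      this.count++;
    }
  }

  get(key: string): V | undefined {
    return this.buckets[this.hash(key)].find((e) => e.key === key)?.value;
  }

  // Load factor = stored entries / table size; crossing ~0.7 is a common
  // trigger for resizing the table.
  loadFactor(): number {
    return this.count / this.tableSize;
  }
}
```

A real implementation would resize and rehash when `loadFactor()` crosses the chosen threshold; this sketch only exposes the measurement.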
Stacks and queues are implemented using either JavaScript arrays or Singly Linked Lists. In both cases, the time complexity of the stack and queue operations (`push`, `pop`, `top`/`first` aka peek, `add` aka enqueue, `remove` aka dequeue, `isEmpty`, and `length`) is O(1). A counting technique is used across the relevant implementations to achieve a `length` method that runs in O(1) time.
The array implementations use marginally more memory than the Singly Linked List implementations, because JavaScript arrays are dynamic arrays and reserve spare capacity during insertion operations. However, the array implementations may outperform the linked list implementations on certain hardware setups due to their contiguous memory layout (and the resulting cache locality), especially when vast numbers of elements are stored and processed in a complex program while the machine is under significant load.
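The counting technique mentioned above can be sketched as follows: a linked-list queue keeps a counter in sync with `add`/`remove`, so `length` never has to traverse the list (the names below mirror the operations listed above but are not copied from the repo):

```typescript
// Singly linked list queue with an O(1) length method via a size counter.
class QueueNode<T> {
  constructor(public value: T, public next: QueueNode<T> | null = null) {}
}

class LinkedQueue<T> {
  private head: QueueNode<T> | null = null;
  private tail: QueueNode<T> | null = null;
  private count = 0;

  // aka enqueue; O(1) thanks to the tail pointer.
  add(value: T): void {
    const node = new QueueNode(value);
    if (this.tail) this.tail.next = node;
    else this.head = node;
    this.tail = node;
    this.count++;
  }

  // aka dequeue; O(1).
  remove(): T | undefined {
    if (!this.head) return undefined;
    const value = this.head.value;
    this.head = this.head.next;
    if (!this.head) this.tail = null;
    this.count--;
    return value;
  }

  // aka peek; O(1).
  first(): T | undefined {
    return this.head?.value;
  }

  isEmpty(): boolean {
    return this.count === 0;
  }

  // O(1): reads the counter instead of walking the list.
  length(): number {
    return this.count;
  }
}
```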
- Weighted Graph (Adjacency List and Adjacency Matrix implementations)
- Tree
- Trie
- B-Tree
- Binary Tree
- Red-Black Tree
- AVL Tree
- Binary Heap
- Queue
- Double-Ended Queue
- Priority Queue (using Heap)
- Double-Ended Priority Queue
- Stack
- (Singly) Linked List
- Doubly Linked List
- Indexed Skip List
- Disjoint-Set
- Multiset (aka Bag)
- Hash Table
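As a taste of one of the structures listed above, here is a minimal Disjoint-Set sketch using path compression and union by rank (the API shown is a common textbook shape, not necessarily the repo's):

```typescript
// Disjoint-Set (union-find) with path compression and union by rank,
// giving near-constant amortized find/union.
class DisjointSet {
  private parent: number[];
  private rank: number[];

  constructor(size: number) {
    this.parent = Array.from({ length: size }, (_, i) => i);
    this.rank = new Array(size).fill(0);
  }

  // Path compression: point every visited node directly at the root.
  find(x: number): number {
    if (this.parent[x] !== x) this.parent[x] = this.find(this.parent[x]);
    return this.parent[x];
  }

  // Union by rank: attach the shallower tree under the deeper one.
  union(a: number, b: number): void {
    const ra = this.find(a);
    const rb = this.find(b);
    if (ra === rb) return;
    if (this.rank[ra] < this.rank[rb]) {
      this.parent[ra] = rb;
    } else if (this.rank[ra] > this.rank[rb]) {
      this.parent[rb] = ra;
    } else {
      this.parent[rb] = ra;
      this.rank[ra]++;
    }
  }
}
```

This is the structure behind Kruskal's Algorithm (listed further below), where it detects whether an edge would close a cycle.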
Time Complexity
Algorithm | Best-Case | Expected-Case | Worst-Case |
---|---|---|---|
Linear Search | O(1) | O(n) | O(n) |
Binary Search | O(1) | O(log(n)) | O(log(n)) |
Rabin-Karp Substring Search[^7] | O(s+b) | - | O(s*b) |
Insertion Sort | O(n) | - | O(n^2) |
Bubble Sort | O(n) | - | O(n^2) |
Merge Sort | O(n*log(n)) | - | O(n*log(n)) |
Radix Sort[^8] | O(k*n) | - | O(k*n) |
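The O(log(n)) bound for Binary Search in the table above comes from halving the remaining range on every comparison; a minimal iterative sketch (illustrative, not the repo's implementation):

```typescript
// Iterative binary search over a sorted array. Each iteration halves the
// candidate range, so at most log2(n) + 1 comparisons are needed; the O(1)
// best case occurs when the first middle element matches.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not found
}
```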
- Linear Search
- Binary Search
- Rabin-Karp Substring Search
- Depth-First Search on a Graph (recursive and iterative)
- Breadth-First Search on a Graph (recursive and iterative)
- Binary Search (Binary Search Tree)
- Binary Tree Traversal: In-Order, Post-Order, and Pre-Order
- Merge Sort
- Quick Sort
- Radix Sort
- Bubble Sort
- Selection Sort
- Heap Sort
- External Sort (of large collections using Double Ended Priority Queue)
- Topological Sort of a Directed Graph: Kahn's Algorithm
- Christofides Algorithm
- Dijkstra's Algorithm
- A* Algorithm
- Bellman-Ford Algorithm
- Floyd-Warshall Algorithm
- Minimum Spanning Forest: Kruskal's Algorithm
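To connect the Rabin-Karp entry in the complexity table with the algorithm list above, here is a generic rolling-hash sketch (variable names and the modulus are illustrative choices, not taken from the repo): the pattern is hashed once, and the window hash over the source is updated in O(1) per shift, giving O(s+b) when collisions are rare and O(s*b) when every position forces a character-by-character check.

```typescript
// Rabin-Karp substring search with a polynomial rolling hash.
// Returns the index of the first match, or -1 if the pattern is absent.
function rabinKarp(source: string, pattern: string): number {
  const s = pattern.length;
  const b = source.length;
  if (s === 0) return 0;
  if (s > b) return -1;

  const BASE = 256;
  const MOD = 1_000_003; // a prime modulus keeps hash values small
  let high = 1; // BASE^(s-1) % MOD, used to remove the outgoing character
  for (let i = 0; i < s - 1; i++) high = (high * BASE) % MOD;

  let patHash = 0;
  let winHash = 0;
  for (let i = 0; i < s; i++) {
    patHash = (patHash * BASE + pattern.charCodeAt(i)) % MOD;
    winHash = (winHash * BASE + source.charCodeAt(i)) % MOD;
  }

  for (let i = 0; i + s <= b; i++) {
    // Verify on hash match to rule out collisions (this check is what
    // produces the O(s*b) worst case under adversarial inputs).
    if (patHash === winHash && source.slice(i, i + s) === pattern) return i;
    if (i + s < b) {
      // Roll the window: drop source[i], append source[i + s].
      winHash = (winHash - source.charCodeAt(i) * high) % MOD;
      winHash = (winHash * BASE + source.charCodeAt(i + s)) % MOD;
      if (winHash < 0) winHash += MOD; // JS % can yield negative values
    }
  }
  return -1;
}
```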
- McDowell, Gayle Laakmann. Cracking the Coding Interview: 189 Programming Questions and Solutions. 6th ed., CareerCup, LLC, 2015.
- W3Schools Data Structures and Algorithms (DSA) Tutorial
- The TypeScript Handbook
Footnotes

[^1]: Insert and remove operations are evaluated separately from the search operation.

[^2]: Assuming a poor hash function and/or too small of a table size, resulting in many collisions (the number of list items per bin is proportional to the total number of items stored in the table). Also assumes that collisions are handled using a Singly Linked List, as implemented in the code.

[^3]: Includes AVL Trees and Red-Black Trees.

[^4]: Assuming a good hash function is used with a sufficient table size to minimize collisions.

[^5]: Assuming that randomized promotions take place via a probability function.

[^6]: The load factor should be selected based on testing against real-world data. Absent testing, a load factor of 70-75% can be used as a rule of thumb.

[^7]: Where `s` is the length of the substring and `b` is the length of the base (source) string.

[^8]: Where `k` is the number of digits (for numbers; more generally, the length of whatever quantity is being sorted) and `n` is the number of elements.