Algorithm Types
Time Complexity
Graph Algorithms
NP Problems & Complexity
Sorting & Searching
100

This strategy divides the problem into subproblems, solves them recursively, and merges the results.

Divide and Conquer  
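
A minimal Python sketch of the pattern (the function name and example values are illustrative, not part of the original cards): finding the maximum of a list by splitting it in half, solving each half, and combining.

    def dc_max(a, lo, hi):
        # Conquer directly when the subproblem is trivial.
        if hi - lo == 1:
            return a[lo]
        mid = (lo + hi) // 2           # divide
        left = dc_max(a, lo, mid)      # solve left half recursively
        right = dc_max(a, mid, hi)     # solve right half recursively
        return max(left, right)        # merge the two partial results
    print(dc_max([3, 7, 1, 9, 4], 0, 5))  # 9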

100

What is the time complexity of linear search?

O(n)
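
For illustration, a plain Python linear search; the single pass over all n elements is what gives the O(n) bound.

    def linear_search(a, target):
        # Examine each element once; at most n comparisons.
        for i, x in enumerate(a):
            if x == target:
                return i
        return -1  # not found
    print(linear_search([4, 2, 9, 7], 9))  # 2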

100

What algorithm would you use to find the shortest path in an unweighted graph?

Breadth-First Search 
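
A sketch in Python, assuming a dict-based adjacency list: BFS visits nodes in order of hop count, so the first time a node is reached its distance is already minimal.

    from collections import deque
    def bfs_distances(graph, source):
        # graph: dict mapping node -> list of neighbours (unweighted edges)
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:            # first visit = shortest hop count
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist
    g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
    print(bfs_distances(g, 'a'))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}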

100

What does NP stand for in computational complexity?

Nondeterministic Polynomial time

100

Which sorting algorithm is best for nearly sorted data?

Insertion Sort
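
An illustrative Python version: on nearly sorted input the inner shifting loop does almost no work, so the running time approaches O(n).

    def insertion_sort(a):
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # Shift larger elements right; on nearly sorted input this loop
            # runs only a few times per element.
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a
    print(insertion_sort([1, 2, 4, 3, 5, 6]))  # [1, 2, 3, 4, 5, 6]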

200

Give one example where the greedy method fails to produce an optimal solution.

0/1 Knapsack Problem
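
A small worked counterexample in Python (the weights and values are illustrative numbers, not from the original card): greedy by value/weight ratio collects 160, while the true 0/1 optimum is 220.

    from itertools import combinations
    # Items as (weight, value); knapsack capacity 50.
    items = [(10, 60), (20, 100), (30, 120)]
    capacity = 50
    # Greedy by value/weight ratio: takes (10, 60) then (20, 100); (30, 120) no longer fits.
    greedy_value, remaining = 0, capacity
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= remaining:
            greedy_value += v
            remaining -= w
    # Brute force over all subsets gives the true 0/1 optimum.
    best = max(sum(v for _, v in sub)
               for r in range(len(items) + 1)
               for sub in combinations(items, r)
               if sum(w for w, _ in sub) <= capacity)
    print(greedy_value, best)  # 160 220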

200

Why is merge sort preferred over quicksort for linked lists?

Merge sort needs only sequential access for splitting and merging, which linked lists support naturally, while quicksort's partitioning and pivot selection benefit from constant-time random access that lists lack.
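
A sketch of the key step in Python (the Node class is a made-up minimal singly linked list, purely for illustration): merging two sorted lists only follows next pointers, so no random access is needed.

    class Node:
        def __init__(self, val, nxt=None):
            self.val, self.next = val, nxt
    def merge(a, b):
        # Splice two already-sorted linked lists together using only
        # sequential pointer traversal (the operation merge sort relies on).
        dummy = tail = Node(None)
        while a and b:
            if a.val <= b.val:        # <= keeps the merge stable
                tail.next, a = a, a.next
            else:
                tail.next, b = b, b.next
            tail = tail.next
        tail.next = a or b
        return dummy.next
    # 1->3->5 merged with 2->4 gives 1 2 3 4 5
    m = merge(Node(1, Node(3, Node(5))), Node(2, Node(4)))
    while m:
        print(m.val, end=' ')
        m = m.next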

200

Why does Dijkstra’s algorithm fail on graphs with negative weights?

It assumes that once a node is settled, its shortest distance is final; a negative-weight edge encountered later can produce a shorter path and invalidate that assumption.

200

Define the difference between NP-complete and NP-hard problems.

NP-complete: in NP and every problem in NP reduces to it in polynomial time. NP-hard: at least as hard as every problem in NP, but not necessarily in NP itself.

200

Why is merge sort considered stable while quicksort is not?

Merge sort's merge step preserves the relative order of equal elements, while typical in-place quicksort partitioning can reorder them.

300

Why is memoization preferred over plain recursion in dynamic programming?

Because it avoids redundant subproblem evaluations, improving time complexity.
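
The standard Fibonacci illustration in Python (a textbook example rather than anything from the original cards): plain recursion recomputes the same subproblems exponentially often, while the memoized version solves each subproblem once.

    from functools import lru_cache
    def fib_plain(n):
        # Exponential: the same fib_plain(k) is recomputed many times.
        return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)
    @lru_cache(maxsize=None)
    def fib_memo(n):
        # Linear number of distinct calls: each result is cached after its first evaluation.
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
    print(fib_memo(90))  # instant; fib_plain(90) would effectively never finish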

300

Derive the recurrence relation for merge sort.

T(n) = 2T(n/2) + O(n)
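
A brief unrolling of this recurrence (standard derivation, written here in LaTeX):

    \begin{aligned}
    T(n) &= 2T(n/2) + cn \\
         &= 4T(n/4) + 2cn \\
         &= 2^k T(n/2^k) + kcn \\
         &= nT(1) + cn\log_2 n \qquad (k = \log_2 n) \\
         &= O(n \log n).
    \end{aligned}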

300

Explain the role of a priority queue in Dijkstra's algorithm.

It efficiently selects the next node with minimum distance.
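
A compact Python sketch using heapq as the priority queue (the dict-of-dicts graph format is assumed for illustration; weights must be non-negative):

    import heapq
    def dijkstra(graph, source):
        # graph: dict mapping u -> {v: weight}
        dist = {source: 0}
        heap = [(0, source)]              # priority queue keyed by tentative distance
        while heap:
            d, u = heapq.heappop(heap)    # cheapest unfinished node in O(log V)
            if d > dist.get(u, float('inf')):
                continue                  # stale entry, node already settled with a shorter path
            for v, w in graph[u].items():
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist
    g = {'s': {'a': 2, 'b': 5}, 'a': {'b': 1}, 'b': {}}
    print(dijkstra(g, 's'))  # {'s': 0, 'a': 2, 'b': 3}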

300

Give a reduction from SAT to 3-SAT.

Split large clauses into 3-literal clauses by adding new variables (sketch acceptable).
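
A sketch of the clause-splitting step in Python (the fresh-variable names like 'y1' are made up for illustration): a clause (l1 ∨ l2 ∨ l3 ∨ l4 ∨ l5) becomes (l1 ∨ l2 ∨ y1) ∧ (¬y1 ∨ l3 ∨ y2) ∧ (¬y2 ∨ l4 ∨ l5), which is satisfiable exactly when the original clause is.

    def split_clause(clause, fresh):
        # clause: list of literal strings, e.g. ['x1', '-x2', 'x3', 'x4', 'x5']
        # fresh:  iterator yielding unused variable names
        if len(clause) <= 3:
            return [clause]                  # already small enough (padding of 1- and
                                             # 2-literal clauses is omitted in this sketch)
        y = next(fresh)
        first = [clause[0], clause[1], y]    # (l1 v l2 v y)
        rest = ['-' + y] + clause[2:]        # (~y v l3 v ... v lk), handled recursively
        return [first] + split_clause(rest, fresh)
    fresh = (f'y{i}' for i in range(1, 100))
    print(split_clause(['x1', '-x2', 'x3', 'x4', 'x5'], fresh))
    # [['x1', '-x2', 'y1'], ['-y1', 'x3', 'y2'], ['-y2', 'x4', 'x5']]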

300

Compare average vs. worst-case time complexity of quicksort.

Average: O(n log n), Worst: O(n²) if pivot is poorly chosen.
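
An illustrative Python quicksort: a random pivot keeps the expected split balanced (O(n log n)), whereas always taking, say, the first element degrades to O(n²) on already sorted input.

    import random
    def quicksort(a):
        if len(a) <= 1:
            return a
        pivot = random.choice(a)              # random pivot keeps the expected split balanced
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(greater)
    print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]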

400

Describe a real-world scenario where backtracking is more efficient than brute-force.

Solving Sudoku or N-Queens, where constraints reduce the search space.
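
A small Python sketch of the N-Queens count (standard formulation, not from the original material): placements that violate the column or diagonal constraints are abandoned immediately instead of being completed as brute force would.

    def n_queens(n):
        solutions = 0
        def place(row, cols, diag1, diag2):
            nonlocal solutions
            if row == n:
                solutions += 1
                return
            for col in range(n):
                # Prune: skip any column already attacked (the backtracking step).
                if col in cols or (row - col) in diag1 or (row + col) in diag2:
                    continue
                place(row + 1, cols | {col}, diag1 | {row - col}, diag2 | {row + col})
        place(0, set(), set(), set())
        return solutions
    print(n_queens(8))  # 92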

400

What does Big-O notation ignore, and why? And what is the best possible time complexity for comparison-based sorting algorithms?

It ignores constants and lower-order terms to focus on asymptotic growth; the lower bound for comparison-based sorting is O(n log n).

400

In which scenario is Prim’s algorithm preferred over Kruskal’s for MST?

When the graph is dense and represented as an adjacency matrix.
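
A sketch of the array-based O(V²) version in Python (adjacency matrix with float('inf') for missing edges, an assumed representation): this is the setting where Prim's linear scan beats Kruskal's edge sorting on dense graphs.

    def prim_mst_weight(adj):
        # adj: V x V matrix of edge weights, float('inf') where no edge exists.
        V = len(adj)
        in_tree = [False] * V
        best = [float('inf')] * V      # cheapest edge connecting each vertex to the tree
        best[0] = 0
        total = 0
        for _ in range(V):
            # O(V) scan replaces the priority queue; fine when E is close to V^2.
            u = min((v for v in range(V) if not in_tree[v]), key=lambda v: best[v])
            in_tree[u] = True
            total += best[u]
            for v in range(V):
                if not in_tree[v] and adj[u][v] < best[v]:
                    best[v] = adj[u][v]
        return total
    INF = float('inf')
    adj = [[INF, 2, 3],
           [2, INF, 1],
           [3, 1, INF]]
    print(prim_mst_weight(adj))  # 3  (edges 0-1 and 1-2)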

400

Why is P ≠ NP still an unsolved problem?

Because no proof has been found for either P = NP or P ≠ NP; it is one of the Clay Millennium Prize Problems.

400

Write the recurrence for quicksort and solve using Master Theorem.

General recurrence: T(n) = T(k) + T(n − k − 1) + O(n), where k is the number of elements smaller than the pivot. With balanced partitions this becomes T(n) = 2T(n/2) + O(n), which the Master Theorem (case 2) solves to O(n log n); the unbalanced worst case gives O(n²).

500

Compare and contrast dynamic programming and greedy algorithms in terms of optimal substructure and decision making.

Both rely on optimal substructure, but a greedy algorithm commits to locally optimal choices and never revisits them, while DP evaluates all relevant subproblem choices and uses memoization (or tabulation) to guarantee a globally optimal solution.

500

Given T(n) = T(n − 1) + O(1), derive the time complexity.

It's a linear recurrence, so T(n) = O(n).
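
A one-line unrolling in LaTeX, under the usual assumption that T(0) = O(1):

    T(n) = T(n-1) + c = T(n-2) + 2c = \dots = T(0) + nc = O(n).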

500

Prove that a tree with n vertices has exactly n-1 edges.

By induction: the base case (1 vertex, 0 edges) holds; assume every tree with k vertices has k − 1 edges. A tree with k + 1 vertices has a leaf; removing that leaf and its single edge leaves a tree with k vertices and k − 1 edges, so the original tree has (k + 1) − 1 = k edges.

500

Explain the significance of Cook’s theorem in complexity theory.

It showed that SAT is NP-complete, proving the existence of complete problems for NP.

500

How can radix sort achieve linear time complexity?

By sorting digits from least to most significant using a stable subroutine (such as counting sort); total complexity is O(nk), where k is the number of digits, which is linear in n when k is bounded.
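
An LSD radix sort sketch in Python for non-negative integers (base 10 chosen for illustration): each digit pass is a stable bucket/counting pass costing O(n + base), for O(nk) overall.

    def radix_sort(a, base=10):
        if not a:
            return a
        max_val = max(a)
        exp = 1
        while max_val // exp > 0:          # one stable pass per digit, least significant first
            buckets = [[] for _ in range(base)]
            for x in a:
                buckets[(x // exp) % base].append(x)   # appending preserves relative order
            a = [x for bucket in buckets for x in bucket]
            exp *= base
        return a
    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]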