
set-3

101. What type of sorting algorithm is the radix sort algorithm?

  1. Non-comparison based
  2. Comparison based
  3. Logarithmic
  4. Exponential

Answer: 1. Non-comparison based

Explanation:

  • Radix Sort: Radix sort is a non-comparison-based sorting algorithm that sorts numbers by processing individual digits or characters (see the sketch below).
  • Conclusion: Radix sort is a non-comparison-based sorting algorithm.
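
To make the digit-by-digit idea concrete, here is a minimal LSD radix sort sketch for non-negative integers (the function name and sample data are illustrative, not part of the question set). Elements are grouped into buckets by digit value; no two elements are ever compared with each other.

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers (illustrative sketch)."""
    if not nums:
        return nums
    exp = 1
    while max(nums) // exp > 0:
        # Counting pass on the current digit -- no element-to-element comparisons.
        buckets = [[] for _ in range(base)]
        for n in nums:
            buckets[(n // exp) % base].append(n)
        nums = [n for bucket in buckets for n in bucket]
        exp *= base
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]
```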

102. Which sorting algorithm is used to sort arrays in place?

  1. Merge sort
  2. Quick sort
  3. Insertion sort
  4. Bubble sort

Answer: 2. Quick sort

Explanation:

  • Quick Sort: Quick sort is an in-place sorting algorithm, meaning it does not require additional memory proportional to the input size (apart from the recursion stack); see the sketch below.
  • Conclusion: Quick sort is used to sort arrays in place.
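
As a rough illustration of the in-place property, the sketch below uses a Lomuto-style partition that rearranges elements within the same array; no auxiliary array is allocated. The function name and example data are ours, not from the question set.

```python
def quick_sort(arr, lo=0, hi=None):
    """In-place quick sort (Lomuto partition); only the recursion stack is extra space."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]   # place the pivot in its final position
    quick_sort(arr, lo, i - 1)
    quick_sort(arr, i + 1, hi)

data = [5, 2, 9, 1, 7]
quick_sort(data)
print(data)  # [1, 2, 5, 7, 9]
```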

103. What is the time complexity of the shell sort algorithm in the average case?

  1. O(n)
  2. O(n log n)
  3. O(n^2)
  4. O(1)

Answer: 2. O(n log n)

Explanation:

  • Shell Sort: Shell sort has an average-case time complexity of O(n log n), depending on the gap sequence used.
  • Conclusion: The average-case time complexity of shell sort is O(n log n).

104. The time complexity of Insertion Sort is:

  1. O(n)
  2. O(n log n)
  3. O(n^2)
  4. O(log n)

Answer: 3. O(n^2)

Explanation:

  • Insertion Sort: Insertion sort has a worst-case and average-case time complexity of O(n^2) because it performs n(n-1)/2 comparisons in the worst case (see the sketch below).
  • Conclusion: The time complexity of insertion sort is O(n^2).
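
A minimal insertion sort sketch (names and data are illustrative) showing where the n(n-1)/2 worst-case comparisons come from: each new element may be compared against every element already sorted before it.

```python
def insertion_sort(arr):
    """Insertion sort: up to n(n-1)/2 comparisons in the worst case, O(n) when nearly sorted."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:   # shift larger elements one slot to the right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([4, 3, 2, 10, 12, 1]))  # [1, 2, 3, 4, 10, 12]
```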

105. The time complexity of Merge Sort is:

  1. O(n)
  2. O(n log n)
  3. O(n^2)
  4. O(log n)

Answer: 2. O(n log n)

Explanation:

  • Merge Sort: Merge sort has a time complexity of O(n log n) in all cases because it divides the array into two halves, recursively sorts them, and merges the results (see the sketch below).
  • Conclusion: The time complexity of merge sort is O(n log n).
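
A short top-down merge sort sketch (illustrative only): the array is split in half, each half is sorted recursively, and the merge step combines them, giving the O(n log n) behaviour described above.

```python
def merge_sort(arr):
    """Top-down merge sort: split in half, sort each half, merge -- O(n log n) in all cases."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```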

106. The time complexity of Quick Sort is:

  1. O(n)
  2. O(n log n)
  3. O(n^2)
  4. O(log n)

Answer: 2. O(n log n)

Explanation:

  • Quick Sort: Quick sort has an average-case time complexity of O(n log n) because it divides the array into two subarrays and recursively sorts them.
  • Conclusion: The average-case time complexity of quick sort is O(n log n).

107. The time complexity of Heap Sort is:

  1. O(n)
  2. O(n log n)
  3. O(n^2)
  4. O(log n)

Answer: 2. O(n log n)

Explanation:

  • Heap Sort: Heap sort has a time complexity of O(n log n) in all cases because it builds a heap and repeatedly extracts the maximum element (see the sketch below).
  • Conclusion: The time complexity of heap sort is O(n log n).
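
A compact heap sort sketch (names and data are illustrative): it first heapifies the array into a max-heap and then repeatedly swaps the maximum to the end, matching the build-then-extract description above.

```python
def heap_sort(arr):
    """Heap sort: build a max-heap, then repeatedly move the maximum to the end -- O(n log n)."""
    def sift_down(end, root):
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1                       # pick the larger child
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    n = len(arr)
    for root in range(n // 2 - 1, -1, -1):       # build the max-heap
        sift_down(n, root)
    for end in range(n - 1, 0, -1):              # extract the maximum one by one
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(end, 0)
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```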

108. The time complexity of Bubble Sort is:

  1. O(n)
  2. O(n log n)
  3. O(n^2)
  4. O(log n)

Answer: 3. O(n^2)

Explanation:

  • Bubble Sort: Bubble sort has a worst-case and average-case time complexity of O(n^2) because it performs n(n-1)/2 comparisons in the worst case.
  • Conclusion: The time complexity of bubble sort is O(n^2).

109. Which sorting algorithm is used to sort linked lists?

  1. Insertion Sort
  2. Merge Sort
  3. Quick Sort
  4. Heap Sort

Answer: 2. Merge Sort

Explanation:

  • Merge Sort for Linked Lists: Merge sort is well-suited for sorting linked lists because it does not require random access to elements and can efficiently divide and merge the list (see the sketch below).
  • Conclusion: Merge sort is commonly used to sort linked lists.
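
The sketch below shows one common way to merge-sort a singly linked list, assuming a simple Node class of our own: the list is split with slow/fast pointers and merged by relinking nodes, so no random access is needed.

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_sort_list(head):
    """Merge sort on a singly linked list: split with slow/fast pointers, merge by relinking."""
    if head is None or head.next is None:
        return head
    slow, fast = head, head.next                 # find the middle of the list
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None             # cut the list into two halves
    left, right = merge_sort_list(head), merge_sort_list(mid)
    dummy = tail = Node(0)                       # merge the two sorted halves
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

# Example: 4 -> 2 -> 1 -> 3 becomes 1 -> 2 -> 3 -> 4
node = merge_sort_list(Node(4, Node(2, Node(1, Node(3)))))
vals = []
while node:
    vals.append(node.val)
    node = node.next
print(vals)  # [1, 2, 3, 4]
```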

110. Which sorting algorithm is used for sorting large data sets?

  1. Insertion Sort
  2. Merge Sort
  3. Quick Sort
  4. Heap Sort

Answer: 2. Merge Sort

Explanation:

  • Merge Sort: Merge sort is efficient for large data sets because it has a time complexity of O(n log n) and is stable.
  • Conclusion: Merge sort is used for sorting large data sets.

111. Which sorting algorithm is used for sorting arrays with random order?

  1. Insertion Sort
  2. Merge Sort
  3. Quick Sort
  4. Heap Sort

Answer: 3. Quick Sort

Explanation:

  • Quick Sort: Quick sort is efficient for sorting arrays with random order because it has an average-case time complexity of O(n log n) and works well in practice.
  • Conclusion: Quick sort is used for sorting arrays with random order.

112. Which sorting algorithm is used for sorting arrays with sorted or nearly sorted data?

  1. Insertion Sort
  2. Merge Sort
  3. Quick Sort
  4. Heap Sort

Answer: 1. Insertion Sort

Explanation:

  • Insertion Sort: Insertion sort is efficient for sorting arrays with sorted or nearly sorted data because it has a best-case time complexity of O(n).
  • Conclusion: Insertion sort is used for sorting arrays with sorted or nearly sorted data.

113. Which sorting algorithm is used for sorting arrays with a large number of elements?

  1. Insertion Sort
  2. Merge Sort
  3. Quick Sort
  4. Heap Sort

Answer: 4. Heap Sort

Explanation:

  • Heap Sort: Heap sort is efficient for sorting arrays with a large number of elements because it has a time complexity of O(n log n) and does not require additional memory.
  • Conclusion: Heap sort is used for sorting arrays with a large number of elements.

114. Which of the following sorting algorithms is considered to be the fastest for small data sets?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Merge sort

Answer: 2. Insertion sort

Explanation:

  • Insertion Sort: Insertion sort is efficient for small data sets because it has a low overhead and performs well when the number of elements is small.
  • Conclusion: Insertion sort is the fastest sorting algorithm for small data sets.

115. Which of the following sorting algorithms is a divide-and-conquer algorithm?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Merge sort

Answer: 4. Merge sort

Explanation:

  • Merge Sort: Merge sort is a divide-and-conquer algorithm that divides the array into two halves, recursively sorts them, and then merges the sorted halves.
  • Conclusion: Merge sort is a divide-and-conquer algorithm.

116. Which of the following sorting algorithms has the best average case time complexity?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Merge sort

Answer: 3. Quick sort

Explanation:

  • Quick Sort: Quick sort has an average-case time complexity of O(n log n), which is the best among the given options.
  • Conclusion: Quick sort has the best average-case time complexity.

117. Which of the following sorting algorithms has a worst case time complexity of O(n^2)?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Merge sort

Answer: 1. Bubble sort

Explanation:

  • Bubble Sort: Bubble sort has a worst-case time complexity of O(n^2) because it performs n(n-1)/2 comparisons in the worst case.
  • Conclusion: Bubble sort has a worst-case time complexity of O(n^2).

118. Which of the following sorting algorithms is an in-place sorting algorithm?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Merge sort

Answer: 3. Quick sort

Explanation:

  • Quick Sort: Quick sort is an in-place sorting algorithm, meaning it does not require additional memory proportional to the input size.
  • Conclusion: Quick sort is an in-place sorting algorithm.

119. Which of the following sorting algorithms has the best space complexity?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Merge sort

Answer: 2. Insertion sort

Explanation:

  • Insertion Sort: Insertion sort has a space complexity of O(1) because it sorts the array in place without requiring additional memory.
  • Conclusion: Insertion sort has the best space complexity.

120. Which of the following sorting algorithms is suitable for sorting linked lists?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Merge sort

Answer: 4. Merge sort

Explanation:

  • Merge Sort: Merge sort is well-suited for sorting linked lists because it does not require random access to elements and can efficiently divide and merge the list.
  • Conclusion: Merge sort is suitable for sorting linked lists.

121. Which of the following sorting algorithms is used for sorting data in a specialized manner like sorting data in a particular range or sorting data by a specific field?

  1. Bubble sort
  2. Insertion sort
  3. Quick sort
  4. Radix sort

Answer: 4. Radix sort

Explanation:

  • Radix Sort: Radix sort is used for sorting data in a specialized manner, such as sorting numbers by their digits or sorting strings lexicographically.
  • Conclusion: Radix sort is used for sorting data in a specialized manner.

122. What is the time complexity of the linear search algorithm?

  1. O(n)
  2. O(log n)
  3. O(1)
  4. O(n log n)

Answer: 1. O(n)

Explanation:

  • Linear Search: Linear search has a time complexity of O(n) because it checks each element in the list sequentially until the target is found (see the sketch below).
  • Conclusion: The time complexity of linear search is O(n).
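
A minimal linear search sketch (function name and data are illustrative): the target is compared with each element in turn, so up to n comparisons may be needed.

```python
def linear_search(items, target):
    """Linear search: compare the target with each element in turn -- O(n) time."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # not found

print(linear_search([7, 3, 9, 1], 9))  # 2
```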

123. What is the time complexity of the binary search algorithm?

  1. O(n)
  2. O(log n)
  3. O(1)
  4. O(n log n)

Answer: 2. O(log n)

Explanation:

  • Binary Search: Binary search has a time complexity of O(log n) because it repeatedly divides the search interval in half (see the sketch below).
  • Conclusion: The time complexity of binary search is O(log n).
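
A minimal iterative binary search sketch (illustrative names), assuming the input list is already sorted: each iteration halves the remaining interval.

```python
def binary_search(sorted_items, target):
    """Binary search on a sorted list: halve the search interval each step -- O(log n) time."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1          # target lies in the upper half
        else:
            hi = mid - 1          # target lies in the lower half
    return -1  # not found

print(binary_search([1, 3, 7, 9, 12], 9))  # 3
```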

124. What is the main advantage of the binary search algorithm compared to linear search?

  1. More efficient in large data sets
  2. Can be used only on sorted data sets
  3. Can only be used on unsorted data sets
  4. Not as efficient as linear search in small data sets

Answer: 1. More efficient in large data sets

Explanation:

  • Binary Search Advantage: Binary search is more efficient than linear search in large data sets because it has a time complexity of O(log n) compared to O(n) for linear search.
  • Conclusion: The main advantage of binary search is its efficiency in large data sets.

125. What is the basic idea behind the linear search algorithm?

  1. To search for an element by comparing it with every element in the data set
  2. To search for an element by dividing the data set into two parts and comparing the target element with the middle element
  3. To search for an element by comparing it with the first and last elements of the data set
  4. To search for an element by comparing it with the last element of the data set

Answer: 1. To search for an element by comparing it with every element in the data set

Explanation:

  • Linear Search: Linear search works by comparing the target element with each element in the data set sequentially until a match is found.
  • Conclusion: The basic idea behind linear search is to compare the target element with every element in the data set.

126. What is the basic idea behind the binary search algorithm?

  1. To search for an element by comparing it with every element in the data set
  2. To search for an element by dividing the data set into two parts and comparing the target element with the middle element
  3. To search for an element by comparing it with the first and last elements of the data set
  4. To search for an element by comparing it with the last element of the data set

Answer: 2. To search for an element by dividing the data set into two parts and comparing the target element with the middle element

Explanation:

  • Binary Search: Binary search works by repeatedly dividing the data set into two halves and comparing the target element with the middle element to determine which half to search next.
  • Conclusion: The basic idea behind binary search is to divide the data set into two parts and compare the target element with the middle element.

127. What is the best-case time complexity of the binary search algorithm?

  1. O(n)
  2. O(log n)
  3. O(1)
  4. O(n log n)

Answer: 3. O(1)

Explanation:

  • Best Case of Binary Search: In the best case, the target element is found at the middle of the data set in the first comparison, resulting in a time complexity of O(1).
  • Conclusion: The best-case time complexity of binary search is O(1).

128. What is the worst-case time complexity of the binary search algorithm?

  1. O(n)
  2. O(log n)
  3. O(1)
  4. O(n log n)

Answer: 2. O(log n)

Explanation:

  • Worst Case of Binary Search: In the worst case, the target element is found only after the data set has been divided in half repeatedly, resulting in a time complexity of O(log n).
  • Conclusion: The worst-case time complexity of binary search is O(log n).

129. What type of data set is required to perform a binary search?

  1. Unsorted data set
  2. Sorted data set
  3. Randomly ordered data set
  4. Any type of data set

Answer: 2. Sorted data set

Explanation:

  • Binary Search Requirement: Binary search requires the data set to be sorted because it relies on the ability to divide the data set into two halves based on the middle element.
  • Conclusion: A sorted data set is required to perform binary search.

130. What is the main disadvantage of the linear search algorithm compared to binary search?

  1. More efficient in large data sets
  2. Can be used only on sorted data sets
  3. Not as efficient as binary search in large data sets
  4. Can only be used on unsorted data sets

Answer: 3. Not as efficient as binary search in large data sets

Explanation:

  • Linear Search Disadvantage: Linear search is less efficient than binary search in large data sets because it has a time complexity of O(n) compared to O(log n) for binary search.
  • Conclusion: The main disadvantage of linear search is its inefficiency in large data sets.

131. Which search algorithm is faster: linear search or binary search?

  1. Linear search
  2. Binary search
  3. Both are equally fast
  4. It depends on the size of the data set

Answer: 2. Binary search

Explanation:

  • Binary Search: Binary search is faster than linear search because it has a time complexity of O(log n) compared to O(n) for linear search.
  • Conclusion: Binary search is faster than linear search.

132. What is the purpose of a hash function in a hash table?

  1. To convert a key into an index for the table
  2. To sort the elements in the table
  3. To compare elements in the table
  4. To store the elements in the table

Answer: 1. To convert a key into an index for the table

Explanation:

  • Hash Function: A hash function maps a key to an index in the hash table, allowing for efficient storage and retrieval of data.
  • Conclusion: The purpose of a hash function is to convert a key into an index for the table.

133. What is a hash table in computer science?

  1. A data structure that stores elements in a sorted manner
  2. A data structure that stores elements in a random order
  3. A data structure that stores elements using a key-value pair
  4. A data structure that stores elements using an array

Answer: 3. A data structure that stores elements using a key-value pair

Explanation:

  • Hash Table: A hash table is a data structure that stores elements as key-value pairs, allowing for efficient insertion, deletion, and retrieval of data.
  • Conclusion: A hash table stores elements using a key-value pair.

134. What is collision resolution in hash tables?

  1. The process of adding elements to a hash table
  2. The process of removing elements from a hash table
  3. The process of resolving conflicts when two keys hash to the same index
  4. The process of searching for a specific element in a hash table

Answer: 3. The process of resolving conflicts when two keys hash to the same index

Explanation:

  • Collision Resolution: Collision resolution is the process of handling situations where two different keys hash to the same index in the hash table.
  • Conclusion: Collision resolution is the process of resolving conflicts when two keys hash to the same index.

135. What are some common collision resolution techniques in hash tables?

  1. Linear probing
  2. Binary search
  3. Chaining
  4. All of the above

Answer: 4. All of the above

Explanation:

  • Collision Resolution Techniques: Common techniques include:
    1. Linear Probing: Searching for the next available slot in the hash table.
    2. Chaining: Storing multiple elements in the same index using a linked list.
  • Conclusion: All of the above are common collision resolution techniques; a chaining sketch follows below.
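
A toy hash table sketch using chaining (the class and method names are our own, not a standard library API): the hash function maps a key to a bucket index, and colliding keys simply share that bucket's list.

```python
class ChainedHashTable:
    """Toy hash table using chaining: each bucket holds a list of (key, value) pairs."""
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)    # hash function maps a key to a bucket index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                        # key already present: update its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))             # collision: append to the same bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.put("apple", 3)
table.put("banana", 5)
print(table.get("banana"))  # 5
```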

136. What is the time complexity of searching in a hash table using chaining?

  1. O(1)
  2. O(log n)
  3. O(n)
  4. O(n log n)

Answer: 3. O(n)

Explanation:

  • Hash Table with Chaining: In a hash table that uses chaining for collision resolution, the worst-case time complexity for searching is O(n). This occurs when all elements hash to the same bucket, creating a linked list of length n.
  • Conclusion: The worst-case time complexity of searching in a hash table using chaining is O(n).

137. What is the main purpose of a hash function in a hash table?

  1. To sort the elements in the table
  2. To find the index of an element in the table
  3. To compare the elements in the table
  4. To store the elements in the table

Answer: 2. To find the index of an element in the table

Explanation:

  • Hash Function: The primary purpose of a hash function is to map a key to an index in the hash table. This allows for efficient storage and retrieval of elements.
  • Conclusion: The main purpose of a hash function is to determine the index of an element in the hash table.

138. What is the most common way to resolve collisions in a hash table?

  1. Chaining
  2. Open Addressing
  3. Binary Search
  4. Linear Search

Answer: 1. Chaining

Explanation:

  • Collision Resolution: Chaining is the most common method for resolving collisions in a hash table. It involves storing multiple elements in the same bucket using a linked list or another data structure.
  • Conclusion: Chaining is the most widely used method for collision resolution in hash tables.

139. What is the time complexity of searching an element in a hash table using the hash function?

  1. O(n)
  2. O(log n)
  3. O(1)
  4. O(n log n)

Answer: 3. O(1)

Explanation:

  • Hash Table Search: In the average case, searching for an element in a hash table using the hash function takes O(1) time because the hash function directly computes the index of the element.
  • Conclusion: The average-case time complexity of searching an element in a hash table is O(1).

140. What is the main advantage of using a hash table over a traditional array for storing elements?

  1. Faster access time
  2. Better sorting
  3. More memory efficient
  4. Better searching

Answer: 1. Faster access time

Explanation:

  • Hash Table Advantage: The main advantage of a hash table is that it provides faster access time (O(1) on average) compared to traditional arrays, which require O(n) time for searching.
  • Conclusion: Hash tables offer faster access time, making them more efficient for storing and retrieving elements.

141. What is the purpose of a collision resolution technique in a hash table?

  1. To ensure that each element is stored at a unique index in the table
  2. To sort the elements in the table
  3. To compare the elements in the table
  4. To store the elements in the table

Answer: 1. To ensure that each element is stored at a unique index in the table

Explanation:

  • Collision Resolution: The purpose of collision resolution techniques (e.g., chaining, open addressing) is to handle cases where multiple elements hash to the same index, ensuring that each element is stored and retrieved correctly.
  • Conclusion: Collision resolution techniques ensure that each element is stored at a unique index in the hash table.

142. What is the main objective of a minimum spanning tree algorithm?

  1. To find the shortest path between two vertices
  2. To find the minimum weight of a tree that connects all vertices
  3. To find the longest path between two vertices
  4. To find the maximum weight of a tree that connects all vertices

Answer: 2. To find the minimum weight of a tree that connects all vertices

Explanation:

  • Minimum Spanning Tree (MST): The main objective of an MST algorithm is to find a tree that connects all vertices in a graph with the minimum possible total edge weight.
  • Conclusion: The goal of an MST algorithm is to find the minimum weight of a tree that connects all vertices.

143. What is the time complexity of Prim’s algorithm for finding a minimum spanning tree?

  1. O(n^2)
  2. O(n log n)
  3. O(n)
  4. O(log n)

Answer: 1. O(n^2)

Explanation:

  • Prim’s Algorithm: The time complexity of Prim’s algorithm is O(n^2) when using an adjacency matrix and a simple implementation. However, with a priority queue, it can be improved to O(n log n) (see the sketch below).
  • Conclusion: The time complexity of Prim’s algorithm is O(n^2) in its basic form.
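
A small sketch of Prim's algorithm using Python's heapq as the priority queue, assuming an adjacency-list graph of our own construction: the tree grows one vertex at a time by always taking the cheapest edge leaving it.

```python
import heapq

def prim_mst(graph, start=0):
    """Prim's algorithm with a priority queue: grow the MST one vertex at a time."""
    visited = {start}
    # Heap of candidate edges (weight, u, v) leaving the current tree.
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue                         # edge would connect two tree vertices
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for nxt, nw in graph[v]:
            if nxt not in visited:
                heapq.heappush(edges, (nw, v, nxt))
    return mst, total

# Adjacency list: vertex -> list of (neighbor, weight)
graph = {0: [(1, 4), (2, 1)], 1: [(0, 4), (2, 2), (3, 5)],
         2: [(0, 1), (1, 2), (3, 8)], 3: [(1, 5), (2, 8)]}
print(prim_mst(graph))  # ([(0, 2, 1), (2, 1, 2), (1, 3, 5)], 8)
```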

144. What is the time complexity of Kruskal’s algorithm for finding a minimum spanning tree?

  1. O(n^2)
  2. O(n log n)
  3. O(n)
  4. O(log n)

Answer: 2. O(n log n)

Explanation:

  • Kruskal’s Algorithm: The time complexity of Kruskal’s algorithm is O(n log n) due to the sorting of edges and the use of a disjoint-set (union-find) data structure for cycle detection (see the sketch below).
  • Conclusion: The time complexity of Kruskal’s algorithm is O(n log n).
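
A small Kruskal sketch (illustrative names, simplified union-find): edges are sorted by weight and added only when they connect two different components, so no cycle is ever formed.

```python
def kruskal_mst(num_vertices, edges):
    """Kruskal's algorithm: sort edges by weight, add each edge that doesn't form a cycle."""
    parent = list(range(num_vertices))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # sorting the edges dominates the running time
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: adding the edge forms no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(4, 0, 1), (1, 0, 2), (2, 1, 2), (5, 1, 3), (8, 2, 3)]  # (weight, u, v)
print(kruskal_mst(4, edges))  # ([(0, 2, 1), (1, 2, 2), (1, 3, 5)], 8)
```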

145. What is the difference between Prim’s and Kruskal’s algorithm for finding a minimum spanning tree?

  1. Prim’s algorithm starts with a single vertex and adds vertices to the tree while Kruskal’s algorithm starts with all vertices and adds edges to the tree.
  2. Prim’s algorithm starts with all vertices and adds edges to the tree while Kruskal’s algorithm starts with a single vertex and adds vertices to the tree.
  3. Both algorithms start with a single vertex and add edges to the tree.
  4. Both algorithms start with all vertices and add vertices to the tree.

Answer: 1. Prim’s algorithm starts with a single vertex and adds vertices to the tree while Kruskal’s algorithm starts with all vertices and adds edges to the tree.

Explanation:

  • Prim’s Algorithm: Prim’s algorithm starts with a single vertex and grows the MST by adding the minimum-weight edge that connects a vertex in the tree to a vertex outside the tree.
  • Kruskal’s Algorithm: Kruskal’s algorithm starts with all vertices and grows the MST by adding the minimum-weight edge that does not form a cycle.
  • Conclusion: The key difference is that Prim’s algorithm builds the tree vertex by vertex, while Kruskal’s algorithm builds it edge by edge.

146. What is the Round-Robin algorithm used for?

  1. To find the minimum spanning tree
  2. To schedule processes in a computer system
  3. To sort a list of elements
  4. To find the longest path between two vertices

Answer: 2. To schedule processes in a computer system

Explanation:

  • Round-Robin Algorithm: The Round-Robin algorithm is a CPU scheduling algorithm that assigns a fixed time slice to each process in a cyclic order (see the sketch below).
  • Conclusion: The Round-Robin algorithm is used for scheduling processes in a computer system.
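
A simplified round-robin scheduling sketch (process names and burst times are made up): each process runs for at most one quantum and, if unfinished, rejoins the back of the ready queue.

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin scheduling: each process runs for a fixed time slice in cyclic order."""
    ready = deque(processes)            # entries are (name, remaining_burst_time)
    order = []
    while ready:
        name, remaining = ready.popleft()
        slice_used = min(quantum, remaining)
        order.append((name, slice_used))
        if remaining > slice_used:      # not finished: go to the back of the queue
            ready.append((name, remaining - slice_used))
    return order

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```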

147. What is the main advantage of using a minimum spanning tree algorithm?

  1. To find the shortest path between two vertices
  2. To find the minimum weight of a tree that connects all vertices
  3. To find the longest path between two vertices
  4. To find the maximum weight of a tree that connects all vertices

Answer: 2. To find the minimum weight of a tree that connects all vertices

Explanation:

  • Minimum Spanning Tree (MST): The main advantage of an MST algorithm is that it finds the minimum weight of a tree that connects all vertices in a graph, which is useful in network design and optimization.
  • Conclusion: The primary advantage of an MST algorithm is to find the minimum weight of a tree that connects all vertices.

148. What is the use of a Round-Robin algorithm in a computer system?

  1. To find the minimum spanning tree
  2. To schedule processes in a computer system
  3. To sort a list of elements
  4. To find the longest path between two vertices

Answer: 2. To schedule processes in a computer system

Explanation:

  • Round-Robin Algorithm: The Round-Robin algorithm is used in operating systems to schedule processes fairly by giving each process a fixed time slice.
  • Conclusion: The Round-Robin algorithm is used for process scheduling in a computer system.

149. What is the main disadvantage of Kruskal’s algorithm compared to Prim’s algorithm for finding a minimum spanning tree?

  1. Kruskal’s algorithm is less efficient than Prim’s algorithm
  2. Kruskal’s algorithm is more difficult to implement than Prim’s algorithm
  3. Kruskal’s algorithm is less reliable than Prim’s algorithm
  4. Kruskal’s algorithm is less accurate than Prim’s algorithm

Answer: 1. Kruskal’s algorithm is less efficient than Prim’s algorithm

Explanation:

  • Kruskal’s vs Prim’s: Kruskal’s algorithm is generally less efficient than Prim’s algorithm for dense graphs because it requires sorting all edges, which takes O(n log n) time.
  • Conclusion: The main disadvantage of Kruskal’s algorithm is that it is less efficient than Prim’s algorithm for dense graphs.

150. What is the use of a hash function in a hash table data structure?

  1. To generate a unique key for each element in the table
  2. To sort the elements in the table
  3. To find the shortest path between two vertices
  4. To find the longest path between two vertices

Answer: 1. To generate a unique key for each element in the table

Explanation:

  • Hash Function: The primary use of a hash function is to map keys to indices in the hash table, ensuring efficient storage and retrieval of elements.
  • Conclusion: The hash function generates a unique key for each element in the table.