WGU Foundations of Computer Science Questions and Answers
Which method allows a user to convert a string value to all capital letters in Python?
Options:
toUpperCase()
makeUpper()
upper()
upperCase()
Answer:
C
Explanation:
In Python, strings are objects of type str, and the language provides many built-in string methods for common transformations. The standard method used to convert all alphabetic characters in a string to uppercase is upper(). For example, "Hello, World".upper() produces "HELLO, WORLD". This method is part of Python’s core string API and is documented as returning a new string because strings are immutable in Python; the original string is not modified.
Options A and D resemble methods from other programming languages. For instance, toUpperCase() is commonly seen in Java and JavaScript, not Python. Option B, makeUpper(), is not a standard method in Python’s str type. Python’s naming conventions for built-in methods are typically short and lowercase, which is consistent with upper(), lower(), strip(), and replace().
It is also important to note what upper() does and does not do. It affects letters according to Unicode case-mapping rules, so it works beyond ASCII and supports many languages. Non-alphabetic characters such as digits, punctuation, and whitespace remain unchanged. Because the method returns a new string, it supports functional-style programming and safe reuse of the original data. In many textbook examples, upper() is paired with input normalization tasks, such as case-insensitive comparisons and cleaning user-entered text.
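The behavior described above can be seen in a short session (the variable names here are illustrative):

```python
# upper() returns a new string; the original is unchanged (strings are immutable).
greeting = "Hello, World 123!"
shouted = greeting.upper()

print(shouted)   # "HELLO, WORLD 123!" -- digits and punctuation untouched
print(greeting)  # "Hello, World 123!" -- original string is not modified

# A common use: case-insensitive comparison by normalizing both sides.
print("YES".upper() == "yes".upper())  # True
```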
Which sorting algorithm works by finding the smallest or largest element in an unsorted part of a list and moving it to the sorted part of the list?
Options:
Radix sort
Heap sort
Quicksort
Selection sort
Answer:
D
Explanation:
Selection sort is defined by a simple repeated strategy: divide the list into a sorted region and an unsorted region, then repeatedly select the smallest (or largest) element from the unsorted region and move it to the end of the sorted region. In the common “smallest-first” version, the algorithm scans the unsorted portion to find the minimum element, then swaps it into the next position in the sorted portion. After the first pass, the smallest element is fixed at index 0; after the second pass, the second-smallest is fixed at index 1; and so on until the entire list is sorted.
This exactly matches the description in the question, making selection sort the correct answer. Textbooks often use selection sort to teach algorithmic thinking because it is easy to understand and implement, though not efficient for large datasets. Its time complexity is O(n²) in the average and worst case because it performs roughly n scans of progressively smaller unsorted sections, with each scan taking linear time. Its space usage is O(1) additional space because it sorts in place using swaps.
The other options do not match the described mechanism. Quicksort partitions around a pivot, heap sort uses a heap data structure to repeatedly extract the maximum/minimum, and radix sort processes digits/keys by place value rather than selecting minima by scanning. Selection sort’s defining action is the repeated “select the min/max and place it.”
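A minimal sketch of the "smallest-first" version described above, with the boundary between the sorted and unsorted regions made explicit:

```python
def selection_sort(values):
    """Sort a list in place by repeatedly selecting the minimum of the
    unsorted region and swapping it to the boundary position."""
    n = len(values)
    for boundary in range(n - 1):
        # Scan the unsorted region [boundary, n) for the minimum.
        min_index = boundary
        for i in range(boundary + 1, n):
            if values[i] < values[min_index]:
                min_index = i
        # Swap the minimum into the first unsorted position.
        values[boundary], values[min_index] = values[min_index], values[boundary]
    return values

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```

After pass i, the i+1 smallest elements occupy their final positions, which is the invariant the explanation relies on.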
Which brand of Type 1 hypervisor is commonly used to create virtual machines?
Options:
VMware ESXi
Parallels Desktop
VirtualBox
VMware Workstation
Answer:
A
Explanation:
A Type 1 hypervisor, also called a bare-metal hypervisor, runs directly on the host machine’s hardware rather than on top of a general-purpose operating system. This design is widely described in virtualization textbooks because it improves performance and isolation: the hypervisor controls CPU scheduling, memory management, and I/O virtualization with minimal overhead from an intermediate OS layer. Type 1 hypervisors are therefore common in servers and data centers.
Among the options, VMware ESXi is the well-known Type 1 hypervisor product. It is installed directly onto physical server hardware and provides the virtualization layer used to run multiple virtual machines. In contrast, Parallels Desktop, VirtualBox, and VMware Workstation are typically categorized as Type 2 hypervisors, meaning they run as applications on top of a host operating system like Windows, macOS, or Linux. Type 2 hypervisors are excellent for desktops, development, testing, and learning, but they generally rely on the host OS for device drivers and resource management, which can add overhead.
This distinction matters in practice: data centers favor Type 1 hypervisors for efficiency, centralized management, and robust isolation between workloads. Desktop users often choose Type 2 hypervisors for convenience and easier installation. Therefore, the commonly used Type 1 hypervisor brand listed here is VMware ESXi.
Which statement describes the data type restriction found in most NumPy arrays?
Options:
NumPy arrays are restricted to string data types only.
NumPy arrays must be of the same type of data.
NumPy arrays can only hold integer data types.
NumPy arrays adapt to the most complex data type on the fly.
Answer:
B
Explanation:
Most NumPy arrays enforce a key constraint: all elements share the same dtype (data type). This uniform typing is foundational to NumPy’s performance model. Because each element has the same size and representation, NumPy can store the array in a contiguous memory block and apply low-level, vectorized operations efficiently. This is why NumPy is widely used for numerical computing, statistics, and data analysis: operations like addition, multiplication, and reductions (sum/mean) can be implemented in optimized compiled code without per-element Python overhead.
Option B captures this textbook principle: elements in a typical ndarray are of the same data type. The other options are incorrect. NumPy is not restricted to strings (A), and it is not limited to integers (C); it supports floats, complex numbers, booleans, fixed-width strings, datetime types, and many others. Option D is misleading: NumPy does not continuously “adapt on the fly” during normal use. The dtype is generally fixed once the array exists. What NumPy does do is choose an appropriate common dtype when you create an array from mixed inputs (for example, mixing ints and floats yields floats). But after creation, assignments are cast into the existing dtype rather than dynamically changing the dtype to accommodate new values.
This restriction is precisely what differentiates NumPy arrays from Python lists and enables predictable memory layout and fast numerical computation.
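The fixed-dtype behavior can be demonstrated in a few lines (the exact integer dtype chosen at creation is platform-dependent):

```python
import numpy as np

# All elements share one dtype, fixed at creation time.
a = np.array([1, 2, 3])
print(a.dtype)   # an integer dtype, e.g. int64 on most platforms

# Mixed ints and floats: NumPy picks a common dtype (float) at creation.
b = np.array([1, 2.5, 3])
print(b.dtype)   # float64

# After creation, assignments are cast into the existing dtype.
a[0] = 7.9
print(a[0])      # 7 -- the float was truncated to fit the integer dtype
```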
Which order is impossible when traversing a binary tree using depth first search?
Options:
Level-order traversal
Pre-order traversal
Post-order traversal
In-order traversal
Answer:
A
Explanation:
Depth-first search (DFS) explores a tree by going as deep as possible along a branch before backtracking. In binary trees, DFS gives rise to the classic traversal orders pre-order, in-order, and post-order, each defined by when you “visit” the node relative to its left and right subtrees. Pre-order visits the node first, then left subtree, then right subtree. In-order visits left subtree, then the node, then right subtree. Post-order visits left subtree, then right subtree, then the node. These are all DFS-based because they fully explore subtrees before moving sideways to another branch.
Level-order traversal is different: it visits nodes layer by layer from the root outward (all nodes at depth 0, then depth 1, then depth 2, etc.). This is a hallmark of breadth-first search (BFS), not DFS. Textbooks emphasize this distinction because DFS and BFS have different properties: BFS naturally finds shortest paths in unweighted graphs and produces level-order traversal in trees, while DFS is useful for tasks like topological sorting, cycle detection, and exploring structure recursively.
Therefore, the traversal order that is impossible to produce as a depth-first traversal of a binary tree is level-order traversal. The DFS orders (pre-, in-, post-) are all achievable by depth-first strategies, typically implemented recursively or with an explicit stack.
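A compact sketch of the three DFS orders, using a hypothetical three-node tree:

```python
# A minimal binary-tree node; the tree shape below is illustrative.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node):   # node, left, right
    return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):    # left, node, right
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):  # left, right, node
    return [] if node is None else postorder(node.left) + postorder(node.right) + [node.value]

#       1
#      / \
#     2   3
root = Node(1, Node(2), Node(3))
print(preorder(root))   # [1, 2, 3]
print(inorder(root))    # [2, 1, 3]
print(postorder(root))  # [2, 3, 1]
```

All three are produced by the same depth-first recursion; only the position of the "visit" changes. No placement of the visit yields level order [1, 2, 3] on a deeper tree, which is why level order requires BFS.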
Which Windows 11 tool enables a user to manually add a Bluetooth device if it does not automatically configure when first connected?
Options:
Network center
Task scheduler
Windows defender
Device manager
Answer:
D
Explanation:
When a Bluetooth device does not configure automatically, the underlying issue is often driver discovery, device enumeration, or the Bluetooth adapter’s state. In Windows, the tool traditionally associated with manually managing hardware devices and their drivers is Device Manager. It lets a user view hardware categories (including Bluetooth adapters), enable or disable devices, update drivers, uninstall and rescan, and address “unknown device” situations. These actions are core to manual configuration because they influence whether Windows can properly recognize and communicate with a Bluetooth device.
Windows 11 pairing itself is typically initiated from the Settings app under Bluetooth and devices, where a user chooses “Add device” to pair a new accessory. However, among the options provided, only Device Manager is a hardware-configuration tool that can resolve situations where automatic configuration fails due to driver or adapter problems. Network-related tools do not handle local device drivers, Task Scheduler automates tasks rather than adding devices, and Windows Defender is focused on security and malware protection rather than device setup.
From a systems perspective, this reflects a key operating-systems concept: successful device use requires both discovery/pairing and a correctly installed driver stack. Device Manager is the standard interface for the driver and device side of that equation, which is why it is the best match to “manually add or configure” hardware in the given choices.
How is the NumPy package imported into a Python session?
Options:
import num_py
import numpy as np
using numpy
include numpy
Answer:
B
Explanation:
In Python, external libraries are brought into a program using the import statement. NumPy, which provides the ndarray type and a large collection of numerical computing functions, is conventionally imported with an alias for convenience. The standard and widely taught pattern is import numpy as np. This imports the numpy module and binds it to the shorter name np, making code more readable and reducing repeated typing, especially in mathematical expressions such as np.array(...), np.mean(...), or np.dot(...).
Option A is incorrect because the module name is numpy, not num_py. Options C and D resemble syntax from other languages (for example, “using” in C# or “include” in C/C++), but they are not valid Python import mechanisms. Python’s module system is based on imports, and the aliasing feature (as np) is built into the import statement.
Textbooks also emphasize that importing a package requires that it be installed in the active Python environment. If NumPy is not installed, import numpy as np will raise an ImportError (or ModuleNotFoundError in modern Python). Once imported, the alias np is used consistently in scientific computing materials, notebooks, and professional data analysis codebases, which is why this option is considered the correct and expected answer.
Which aspect is excluded from a NumPy array’s structure?
Options:
The data pointer
The shape of the array
The data type or dtype pointer
The encryption key of the array
Answer:
D
Explanation:
A NumPy ndarray is designed for efficient numerical computing, and its structure is defined by metadata required to interpret a contiguous (or strided) block of memory as an n-dimensional array. Textbooks and NumPy’s own conceptual model describe key components such as: a data buffer (where the raw bytes live), a data pointer (reference to the start of that buffer), the dtype (which specifies how to interpret each element’s bytes—e.g., int32, float64), the shape (the size in each dimension), and strides (how many bytes to step in memory to move along each dimension). Together, these allow fast indexing, slicing, and vectorized operations without Python-level loops.
Options A, B, and C are all part of what an array must track to function correctly: the array must know where its data is, how it is laid out (shape/strides), and how to interpret bytes (dtype). In contrast, an encryption key is not a concept that belongs to the internal representation of a numerical array. Encryption is a security mechanism applied at storage or transport layers (for example, encrypting a file on disk or encrypting data sent over a network), not something built into the in-memory structure of a NumPy array object.
Therefore, the aspect excluded from a NumPy array’s structure is the encryption key.
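These metadata fields are visible as attributes on any ndarray; a small example (the stride values shown assume the default C-order layout):

```python
import numpy as np

# 2x3 array of 4-byte integers in the default C (row-major) order.
a = np.arange(6, dtype=np.int32).reshape(2, 3)

print(a.shape)    # (2, 3)   -- size in each dimension
print(a.dtype)    # int32    -- how each element's bytes are interpreted
print(a.strides)  # (12, 4)  -- bytes to step per dimension (3*4 per row, 4 per column)
print(a.data)     # memoryview over the underlying data buffer
```

Note there is no security-related attribute here: nothing in the array's metadata concerns encryption.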
How is a NumPy array named data with 6 elements reshaped into a 2x3 array?
Options:
np.reshape(data, (2, 3))
np_reshape(list, (2, 3))
data.set_shape(2, 3)
data_reshape[2, 3]
Answer:
A
Explanation:
Reshaping is the operation of changing the “view” of an array so that the same elements are arranged with new dimensions. In NumPy, reshaping is possible when the total number of elements stays the same. A 2x3 array contains 6 elements, so a 1D array data of length 6 can be reshaped into shape (2, 3) without adding or removing values. Textbooks stress this invariant: the product of the dimensions must equal the original size.
NumPy provides two standard reshaping interfaces: the function np.reshape(data, (2, 3)) and the method data.reshape(2, 3) (or data.reshape((2, 3))). Option A is correct because it uses the official NumPy function with the proper arguments: the original array and the target shape. The shape is passed as a tuple describing rows and columns.
Option B is incorrect because np_reshape is not the correct NumPy function name, and it references an unrelated identifier list. Option C is incorrect because NumPy arrays do not provide a set_shape method like that. Option D is not valid NumPy syntax for reshaping.
Reshaping is fundamental in data analysis and machine learning: it converts flat vectors into matrices, prepares batches of samples, and aligns dimensions for matrix multiplication and broadcasting.
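Both interfaces in action on a hypothetical six-element array:

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6])

# Function form and method form are equivalent.
m1 = np.reshape(data, (2, 3))
m2 = data.reshape(2, 3)
print(m1)
# [[1 2 3]
#  [4 5 6]]

# The invariant: the product of the new dimensions must equal the size.
# np.reshape(data, (4, 2)) would raise ValueError, since 4 * 2 != 6.
```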
Which action is taken if the first number is the lowest value in a selection sort?
Options:
The first number is increased by one.
The first number is duplicated.
It swaps the selected element with the last unsorted element.
It swaps the selected element with the first unsorted element.
Answer:
D
Explanation:
Selection sort works by maintaining a boundary between a sorted prefix and an unsorted suffix. On each pass, the algorithm finds the smallest value in the unsorted portion and places it into the first position of that unsorted portion (which is also the next position in the sorted prefix). This is usually done by swapping the element at the minimum’s index with the element at the boundary index (the “first unsorted element”). That description matches option D.
If the first element of the unsorted portion is already the smallest, then the minimum’s index equals the boundary index. In textbook implementations, the algorithm may still execute a swap operation, but it becomes a swap of an element with itself (a no-op), leaving the array unchanged. Many implementations include a small optimization: perform the swap only if the minimum index differs from the boundary index. Either way, conceptually the “action taken” by selection sort is still “swap the selected minimum into the first unsorted position,” which is exactly what option D states.
Options A and B are unrelated to sorting; selection sort never increases or duplicates values. Option C is incorrect because selection sort swaps the minimum with the first unsorted element, not the last. After the swap (or no-op), the sorted region grows by one element, and the algorithm repeats from the next boundary position.
This logic is fundamental for understanding how selection sort ensures correctness: after pass i, the smallest i+1 elements are fixed in their final positions.
What is the likely cause if a default Python configuration does not recognize a NumPy array as an allowed data structure?
Options:
The NumPy package is not present.
The array module is not imported.
The Python interpreter is misconfigured.
The Python version is outdated.
Answer:
A
Explanation:
NumPy arrays are not a built-in Python data structure. In a default Python installation, the interpreter includes core types such as int, float, str, list, tuple, dict, and set, plus the standard library. A NumPy array, typically created as numpy.ndarray, is provided by the third-party NumPy library. Therefore, if a “default Python configuration” does not recognize a NumPy array, the most likely cause is that NumPy is not installed or not available in the active environment. This happens often when a user has multiple Python environments (system Python, virtual environments, conda environments) and installs NumPy into one environment while running code in another.
Option B is incorrect because Python’s standard-library array module is different from NumPy. Importing array does not create or enable NumPy’s ndarray type. Option C is possible in rare cases, but the typical, textbook-aligned explanation is missing dependencies rather than an incorrectly configured interpreter. Option D is also unlikely: while very old Python versions may cause compatibility issues with modern NumPy releases, the symptom described—NumPy arrays not being recognized at all—more directly indicates the package is absent in the running environment.
In practice, verifying import numpy and checking the installed packages for the current interpreter resolves the issue.
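A defensive import sketch that surfaces the missing-package case explicitly:

```python
# Catch the error raised when NumPy is absent from the active environment.
try:
    import numpy as np
    print("NumPy is available, version:", np.__version__)
except ModuleNotFoundError:
    print("NumPy is not installed in this environment; "
          "install it for the interpreter you are running, e.g. pip install numpy")
```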
What will the expression fam[3:6] return?
Options:
A list with elements at index 4, 5, and 6
A list with elements at index 3, 4, 5, and 6
A list with elements at index 3, 4, and 5
A list with elements at index 6
Answer:
C
Explanation:
Python slicing follows the rule sequence[start:stop], where the start index is inclusive and the stop index is exclusive. This convention is taught widely because it makes many algorithms and boundary cases simpler: the length of the slice is stop - start (when step is 1), and adjacent slices can partition a sequence without overlap. For a list named fam, the slice fam[3:6] starts at index 3 and includes the elements at indices 3, 4, and 5, but it stops before index 6.
This is a frequent source of off-by-one errors for beginners, so textbooks emphasize remembering: “start is included, stop is not.” If fam had at least 6 elements, then fam[3:6] would produce a new list of exactly three elements (positions 3, 4, 5). If fam had fewer than 6 elements, Python would still return a valid slice up to the end without raising an error, because slicing clamps out-of-range bounds rather than raising an IndexError.
Option A is incorrect because it skips index 3 and incorrectly includes index 6. Option B is incorrect because it includes index 6, which the stop boundary excludes. Option D is incorrect because slicing returns a sublist, not a single element; a single element would require indexing like fam[6].
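A quick demonstration with a hypothetical seven-element list:

```python
fam = [10, 20, 30, 40, 50, 60, 70]

print(fam[3:6])       # [40, 50, 60] -- indices 3, 4, 5; stop index 6 is excluded
print(len(fam[3:6]))  # 3 == stop - start

# Slicing past the end is safe; indexing past the end is not.
print(fam[3:100])     # [40, 50, 60, 70] -- clipped to the end, no error
```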
Which protocol provides encryption while email messages are in transit?
Options:
FTP
HTTP
TLS
IMAP
Answer:
C
Explanation:
“Encryption in transit” means protecting data while it moves across a network so that eavesdroppers cannot read or modify it. For email systems, this protection is most commonly provided by TLS (Transport Layer Security). TLS is a cryptographic protocol that can wrap application protocols (including mail protocols) to provide confidentiality, integrity, and server (and sometimes client) authentication. In practice, TLS is used to secure connections such as SMTP submission (often with STARTTLS or implicit TLS), IMAP over TLS, and POP3 over TLS. Textbooks present TLS as the standard successor to SSL and the foundation of secure communication on the modern Internet.
The other options are not correct in this context. FTP is a file transfer protocol and is traditionally unencrypted unless paired with additional security mechanisms (e.g., FTPS, which uses TLS, or SFTP, which uses SSH). HTTP is a web protocol; it becomes encrypted only when used as HTTPS, which again relies on TLS underneath. IMAP is an email retrieval protocol, but IMAP itself is not the encryption protocol—IMAP can be run over TLS (IMAPS) to become secure.
Therefore, the protocol that provides encryption while email messages (or email protocol traffic) are in transit is TLS.
Which statement describes the relationship between trees and graphs?
Options:
Trees do not have levels.
Trees can have cycles.
Trees can have unconnected nodes.
Trees cannot have cycles.
Answer:
D
Explanation:
In discrete mathematics and computer science, a tree is a special kind of graph. The standard graph-theory definition is that a tree is a connected, acyclic undirected graph. “Acyclic” means it contains no cycles, i.e., you cannot start at a vertex, follow a sequence of edges, and return to the starting vertex without repeating edges in a way that forms a loop. This property is exactly what makes option D correct.
The other options contradict the definition. If a structure has cycles, it is not a tree (though it may still be a graph). If it has unconnected nodes, it is not connected; such a structure is more like a forest (a disjoint union of trees) rather than a single tree. The idea of “levels” belongs to a particular computer-science representation called a rooted tree, where one node is chosen as the root and nodes can be assigned depths/levels based on distance from the root. But levels are not required in the abstract definition of a tree as a graph; they arise from choosing a root and orientation for convenience in algorithms like BFS/DFS, heaps, and parse trees.
So, the relationship is: every tree is a graph with extra structure—specifically, no cycles and (typically) connectivity—and the “no cycles” rule is the key distinguishing feature.
Which type of sorting algorithm starts at the first position and moves the pointer until the end of the list, determining the lowest value?
Options:
Selection sort
Incremental sort
Progressive sort
Pointer sort
Answer:
A
Explanation:
Selection sort is the algorithm that repeatedly scans the unsorted portion of a list to find the lowest (or highest) value and then places it into its correct position in the sorted portion. It begins at the first index (position 0) and treats that as the boundary between sorted and unsorted regions. On the first pass, it moves a scanning pointer through the entire list to determine the minimum element and swaps it into position 0. On the second pass, it starts from position 1, scans to the end to find the next minimum, and swaps it into position 1. This continues until the list is sorted.
This matches the question’s description: “starts at the first position and moves the pointer until the end of the list, determining the lowest value.” Textbooks often describe selection sort with two indices: one for the current boundary position and one for scanning the remainder of the list to find the minimum. The algorithm is simple and uses O(1) extra space, but it is inefficient for large lists because it performs O(n²) comparisons regardless of input order.
The other options are not standard algorithm names in typical computer science curricula. While many sorting algorithms exist (insertion sort, merge sort, quicksort, heap sort), “incremental,” “progressive,” and “pointer sort” are not canonical textbook algorithms in this context. Therefore, the correct answer is selection sort.
What is the alternative way to access the third element of the first row in np_2d?
Options:
np_2d[1, 3]
np_2d[2, 0]
np_2d[3, 1]
np_2d[0, 2]
Answer:
D
Explanation:
NumPy arrays use zero-based indexing, meaning counting starts at 0 rather than 1. In a 2D NumPy array, indexing is typically written in the form array[row_index, column_index]. The first index selects the row, and the second index selects the column. Therefore, the “first row” corresponds to row index 0. Within that row, the “third element” corresponds to column index 2, because the columns are indexed 0, 1, 2, 3, and so on.
So, np_2d[0, 2] directly selects the element at row 0 and column 2, which is the third element in the first row. This is considered an “alternative” to approaches like two-step indexing (np_2d[0][2]), and it is the standard idiom taught for multi-dimensional NumPy arrays.
The other choices point to different locations. np_2d[1, 3] is the fourth element of the second row, not the third element of the first row. np_2d[2, 0] and np_2d[3, 1] attempt to access the third or fourth row, which would often be out of bounds in a small 2-row example and would raise an IndexError. Correct indexing is a cornerstone of array programming because it determines which observation, feature, or matrix entry your computations will use.
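Both indexing idioms on a small illustrative 2x4 array:

```python
import numpy as np

np_2d = np.array([[11, 12, 13, 14],
                  [21, 22, 23, 24]])

# Row 0, column 2: the third element of the first row.
print(np_2d[0, 2])   # 13

# The equivalent two-step indexing: first select row 0, then element 2.
print(np_2d[0][2])   # 13
```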
What is the built-in data structure that implements a hash table in Python?
Options:
Tuple
List
Array
Dictionary
Answer:
D
Explanation:
A hash table is a data structure that supports fast lookup, insertion, and deletion by using a hash function to map keys to positions in an underlying storage structure. In Python, the built-in data structure that provides hash-table behavior is the dictionary, written with curly braces like {"a": 1, "b": 2}. Dictionaries store key–value pairs and are designed so that accessing a value by key, such as d["a"], is efficient on average. Textbooks typically describe this expected efficiency as average-case constant time, often written as O(1), assuming a good hash function and a well-managed table size.
Tuples and lists are sequence types. Lists provide indexed access by integer position, not hashing by arbitrary keys. Tuples are immutable sequences and likewise do not provide key-based hashing semantics. “Array” is not the core built-in mapping structure in Python; while Python has an array module and NumPy has arrays, neither is the built-in hash table abstraction for general key–value storage.
Python dictionaries require keys to be hashable, meaning the key’s hash value is stable during its lifetime (common examples: strings, numbers, tuples of hashable items). This requirement is directly tied to hash-table implementation. Dictionaries are used throughout computer science applications: symbol tables in interpreters, caches and memoization, frequency counting, indexing, and implementing graphs via adjacency maps.
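A short sketch of dictionary usage as a hash table (the word-counting example is illustrative):

```python
# Average-case O(1) lookup, insertion, and deletion by key.
counts = {}
for word in ["cat", "dog", "cat", "bird", "cat"]:
    counts[word] = counts.get(word, 0) + 1
print(counts)          # {'cat': 3, 'dog': 1, 'bird': 1}
print(counts["cat"])   # 3

# Keys must be hashable: strings, numbers, and tuples of hashables work.
point_names = {(0, 0): "origin"}
print(point_names[(0, 0)])  # origin
# point_names[[0, 0]] would raise TypeError: lists are unhashable.
```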
What is an ndarray in Python?
Options:
A built-in Python data array used to store collections of items.
A native Python object that represents a tree-like hierarchical data structure.
An n-dimensional array object provided by the NumPy library.
A module that provides network socket functions similar to XML.
Answer:
C
Explanation:
An ndarray is NumPy’s fundamental data structure: an n-dimensional array designed for efficient numerical computation. The term stands for “N-dimensional array,” and it is implemented as numpy.ndarray. Unlike Python’s built-in list, an ndarray stores elements in a compact, homogeneous format defined by its dtype (such as integers or floating-point numbers). This uniform representation enables fast, vectorized operations and efficient use of memory, which is why ndarray is central in scientific computing and data analysis.
An ndarray supports multiple dimensions: a 1D array behaves like a vector, a 2D array like a matrix (rows and columns), and higher-dimensional arrays represent tensors. Textbooks emphasize that ndarray operations are typically element-wise by default (for example, a + b adds corresponding elements), and that slicing and broadcasting allow powerful computations without explicit loops. This approach is both expressive and efficient because the heavy lifting happens in optimized low-level code.
Option A is incorrect because ndarray is not built into core Python; it comes from NumPy. Option B describes a tree, which is a different data structure entirely. Option D is incorrect because sockets and XML-related functionality belong to other parts of Python’s standard library, not to NumPy or ndarray.
In short, an ndarray is the primary array object of NumPy, providing high-performance multi-dimensional numerical storage and computation.
What is a correct call to the linear search defined as def linear_search(customersList, search_value): ?
Options:
find_linear(customersList)
print(linear_search(customersList, search_value))
linear_search()(customersList)
search_linear(customersList, search_value)
Answer:
B
Explanation:
A function definition in Python specifies a function name and a list of parameters. Here, def linear_search(customersList, search_value): defines a function named linear_search that requires two arguments when called: a list (or sequence) of customer items and the value being searched for. A correct call must therefore supply both arguments in the same order: linear_search(customersList, search_value). Option B is correct because it calls the function properly and then prints the returned result.
Textbooks describe linear search as scanning the list from the beginning to the end, comparing each element to search_value until a match is found or the list ends. The function typically returns an index (e.g., position of the match) or a Boolean, or possibly -1/None if not found. Wrapping the call in print(...) is a standard way to display the returned value for testing or demonstration.
Option A is incorrect because it calls a different function name, not linear_search. Option C is incorrect because linear_search() would attempt to call the function with zero arguments, which would raise a TypeError, and then it tries to call the result as if it were another function. Option D uses a different function name (search_linear) and also contains a spelling mismatch compared to the given definition.
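One plausible implementation matching the given signature (returning the index of the match, or -1 when absent, is an assumption; the question does not specify the return convention):

```python
def linear_search(customersList, search_value):
    # Scan from the beginning, comparing each element to search_value.
    for index in range(len(customersList)):
        if customersList[index] == search_value:
            return index   # position of the first match
    return -1              # sentinel: not found

customersList = ["Ada", "Grace", "Alan", "Edsger"]
print(linear_search(customersList, "Alan"))   # 2
print(linear_search(customersList, "Linus"))  # -1
```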
What is a key advantage of using NumPy when handling large datasets?
Options:
Built-in machine learning algorithms
Automatic data cleaning
Efficient storage and computation
Interactive visualizations
Answer:
C
Explanation:
NumPy’s key advantage for large datasets is efficient storage and fast computation. Unlike Python lists, which store references to objects and can have per-element overhead, NumPy arrays store data in a compact, homogeneous format (single dtype) in contiguous or strided memory. This reduces memory usage and improves cache locality, which is crucial for performance on large arrays. Additionally, NumPy operations are vectorized: many computations run in optimized compiled code rather than interpreted Python loops. This enables large speedups for arithmetic, linear algebra, statistics, and transformations over entire arrays.
Option A is incorrect because NumPy itself does not provide full machine learning algorithms; those are typically found in libraries like scikit-learn, though they build on NumPy. Option B is incorrect because NumPy does not automatically clean data; data cleaning is usually done with pandas or custom logic. Option D is incorrect because interactive visualizations are typically handled by libraries like matplotlib, seaborn, or plotly, not by NumPy.
Textbooks in scientific computing highlight that NumPy forms the computational foundation of the Python data ecosystem. Its array model supports broadcasting, slicing, and efficient aggregations, all of which are essential when working with millions of numeric values. By combining compact memory layout with compiled numerical kernels, NumPy enables scalable analysis and simulation workloads that would be slow or memory-heavy using pure Python lists.
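A small sketch of vectorized computation replacing what would otherwise be an explicit Python loop over a million elements:

```python
import numpy as np

values = np.arange(1_000_000, dtype=np.float64)

# One vectorized expression; the arithmetic runs in compiled code
# over the whole array instead of a per-element Python loop.
scaled = values * 2.0 + 1.0
print(scaled[:3])    # [1. 3. 5.]

# Reductions are likewise single compiled operations.
print(scaled.sum())  # sum of 2i + 1 for i in 0..999999
```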
Which principle can be used to implement an algorithm to calculate factorial or Fibonacci sequence?
Options:
Procedural programming
Iterative programming
Recursion programming
Object-oriented programming
Answer:
C
Explanation:
Factorial and Fibonacci are classic examples used to teach recursion, a technique where a function solves a problem by calling itself on smaller subproblems. The key requirement for recursion is (1) a base case that stops further calls and (2) a recursive case that reduces the problem size. For factorial, the definition is n! = n × (n − 1)! with base case 0! = 1 (or 1! = 1). For Fibonacci, F(n) = F(n − 1) + F(n − 2) with base cases F(0) = 0 and F(1) = 1. These mathematical definitions map directly into recursive code, which is why textbooks frequently introduce recursion using these sequences.
While factorial and Fibonacci can also be computed iteratively, the question asks for the principle that can be used to implement such algorithms, and recursion is the canonical textbook answer. Recursion also connects to important CS topics: call stacks, activation records, and divide-and-conquer problem solving.
Option A (“procedural programming”) and option D (“object-oriented programming”) are broader paradigms rather than the specific technique used in the classic implementations. Option B (“iterative programming”) is a valid alternative approach, but the standard instructional principle highlighted for these particular examples is recursion. Textbooks also note that naive recursive Fibonacci is inefficient (exponential time) unless optimized with memoization or converted to an iterative or dynamic programming approach.
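Minimal recursive sketches of both examples, each with an explicit base case and a recursive case:

```python
def factorial(n):
    # Base case stops the recursion; recursive case shrinks the problem.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def fibonacci(n):
    if n < 2:          # base cases: F(0) = 0, F(1) = 1
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(factorial(5))    # 120
print(fibonacci(10))   # 55
```

Note that this naive fibonacci recomputes subproblems and runs in exponential time; memoization or an iterative loop fixes that, as the explanation above mentions.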