WGU Introduction to Cryptography HNO1 Questions and Answers
(What is the length of the Initialization Vector (IV) in WEP?)
Options:
24 bits
40 bits
48 bits
56 bits
Answer: A
Explanation:
WEP (Wired Equivalent Privacy) uses the RC4 stream cipher and combines a per-packet Initialization Vector (IV) with a shared secret key to form the RC4 seed for that packet’s keystream. The IV in WEP is 24 bits long and is transmitted in the clear as part of the 802.11 frame so the receiver can reconstruct the same per-packet RC4 key stream. The short IV space (2²⁴ possible values) is a major design weakness: on a busy network, IVs repeat frequently, causing keystream reuse. Because RC4 is a stream cipher, keystream reuse enables attackers to derive relationships between plaintexts and recover keys with statistical attacks (notably the Fluhrer, Mantin, and Shamir (FMS) family of attacks and related improvements). WEP also uses a CRC-32 integrity check (ICV) that is not cryptographically strong and is vulnerable to modification attacks. The 24-bit IV length is therefore a key reason WEP is considered insecure and has been replaced by WPA/WPA2 mechanisms that use stronger key mixing, larger nonces/IVs, and robust integrity protection.
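To make the small IV space concrete, here is a rough back-of-the-envelope sketch in Python using the standard birthday-bound approximation; the packet counts are illustrative, not part of the question.

```python
import math

IV_SPACE = 2 ** 24  # number of distinct 24-bit WEP IVs

def iv_collision_probability(packets: int) -> float:
    # Birthday-bound approximation for the chance of at least one repeated IV
    return 1 - math.exp(-packets * (packets - 1) / (2 * IV_SPACE))

for n in (1_000, 5_000, 20_000, 100_000):
    print(f"{n:>7} packets -> {iv_collision_probability(n):.3f}")
# A busy access point sends thousands of frames per second, so IV repeats are inevitable.
```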
(Which number of bits gets encrypted each time encryption is applied during stream encryption?)
Options:
1
40
192
256
Answer: A
Explanation:
In the classical definition, a stream cipher encrypts data in very small units—often described as one bit at a time—by combining plaintext with a keystream (commonly via XOR). While many practical stream ciphers operate on bytes or words for efficiency, the conceptual distinction compared to block ciphers is that stream encryption processes data as a continuous stream rather than fixed-size blocks. This is why the standard teaching answer is “1 bit” per application of the keystream. Block ciphers, by contrast, encrypt blocks like 64 bits (DES/3DES) or 128 bits (AES) in each invocation of the block primitive. Options like 40, 192, and 256 are not typical stream cipher “per-step” processing sizes; 40 and 256 are often associated with key sizes, and 192 could be a key size for AES, not an encryption granularity. The essential security requirement for stream ciphers is that the keystream must be unpredictable and never reused with the same key/nonce combination; otherwise XOR properties allow attackers to recover relationships between plaintexts. Thus, the best answer is 1.
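A minimal pure-Python sketch of the idea (os.urandom stands in for a real cipher's keystream generator), showing that encryption and decryption are the same XOR and why keystream reuse is dangerous:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)                     # stand-in for a cipher-generated keystream
p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"

c1 = xor_bytes(p1, keystream)                  # encrypt: plaintext XOR keystream
assert xor_bytes(c1, keystream) == p1          # decrypt: the same XOR recovers the plaintext

c2 = xor_bytes(p2, keystream)                  # reusing the keystream for a second message...
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)  # ...leaks the XOR of the two plaintexts
```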
(Employee A needs to send Employee B a symmetric key for confidential communication. Which key is used to encrypt the symmetric key?)
Options:
Employee A’s private key
Employee B’s public key
Employee A’s public key
Employee B’s private key
Answer: B
Explanation:
When securely distributing a symmetric key over an untrusted network, a common approach is hybrid cryptography: use asymmetric cryptography to protect the symmetric key, then use the symmetric key for bulk encryption. To ensure only Employee B can recover the symmetric key, Employee A encrypts (wraps) that symmetric key using Employee B’s public key. Because only Employee B should possess the matching private key, only B can decrypt the wrapped symmetric key. This is the same principle used in TLS key exchange (in older RSA key transport) and in secure email: encrypt the session key to the recipient’s public key. Encrypting the symmetric key with Employee A’s private key would not provide confidentiality—anyone with A’s public key could reverse it, and it functions more like a signature than encryption. Employee B’s private key should never be shared and is used only by B to decrypt. Therefore, for confidentiality of the shared symmetric key, the correct encryption key is Employee B’s public key.
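A hedged sketch of this key-wrapping step, assuming the third-party cryptography package is installed; the variable names are illustrative, not from the question.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Employee B generates a key pair; only the public half is shared with A
b_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
b_public = b_private.public_key()

session_key = os.urandom(32)  # the symmetric key A wants to send

# A wraps (encrypts) the symmetric key under B's PUBLIC key
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = b_public.encrypt(session_key, oaep)

# Only B, holding the matching private key, can unwrap it
assert b_private.decrypt(wrapped, oaep) == session_key
```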
(What makes the RC4 cipher unique compared to RC5 and RC6?)
Options:
Stream
Asymmetric
Symmetric
Block
Answer: A
Explanation:
RC4 is unique among the RC family listed because it is a stream cipher. It generates a pseudorandom keystream and encrypts data by XORing that keystream with plaintext bytes (and decryption is the same XOR operation). This differs from RC5 and RC6, which are block ciphers: they encrypt fixed-size blocks of data through multiple rounds of operations (such as modular addition, XOR, and rotations) using a secret key. The stream-cipher design means RC4 historically fit protocols where data arrives continuously (e.g., early wireless and web encryption) and where simple, fast software implementation was desired. However, stream ciphers demand careful handling of nonces/IVs to avoid keystream reuse; reuse can catastrophically leak plaintext relationships. RC4 also has well-documented statistical biases in its keystream, leading to practical attacks in protocols like WEP and later concerns in TLS, which is why RC4 has been deprecated in modern security standards. Still, from a classification standpoint, “stream” is the distinguishing characteristic versus RC5/RC6 being block ciphers.
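For illustration only (RC4 is deprecated and should not be used in new designs), a short pure-Python sketch of RC4's keystream generation; XORing this keystream with data performs both encryption and decryption.

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    # Key-scheduling algorithm (KSA): permute S using the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit keystream bytes
    i = j = 0
    out = bytearray()
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

plaintext = b"hello"
ks = rc4_keystream(b"Key", len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
assert bytes(c ^ k for c, k in zip(ciphertext, ks)) == plaintext  # same operation decrypts
```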
(Which component is used to verify the integrity of a message?)
Options:
TKIP
HMAC
AES
IV
Answer: B
Explanation:
HMAC (Hash-based Message Authentication Code) is a standard mechanism used to verify both integrity and authenticity of a message when two parties share a secret key. It combines a cryptographic hash function (such as SHA-256) with a secret key in a structured way that resists common attacks on naïve keyed-hash constructions. The sender computes an HMAC tag over the message and transmits the message plus tag. The receiver recomputes the HMAC using the same shared secret key and compares the result; if the tag matches, the receiver can be confident the message was not modified in transit and that it came from someone who knows the shared key. AES is an encryption algorithm primarily providing confidentiality; it can provide integrity only when used in authenticated modes (e.g., GCM) but “AES” alone is not the integrity component. An IV helps randomize encryption but does not validate integrity. TKIP is a legacy WLAN protocol component, not the general integrity verifier. Therefore, the correct component for verifying message integrity among the options is HMAC.
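A minimal sketch using Python's standard hmac and hashlib modules; the key and message are made up for illustration.

```python
import hmac
import hashlib

key = b"shared-secret"                        # illustrative shared key
message = b"transfer 100 to account 42"

# Sender computes the tag and transmits message + tag
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(expected, tag))     # True; any change to the message flips this to False
```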
(What is used to randomize the initial value when generating Initialization Vectors (IVs)?)
Options:
Key
Plaintext
Algorithm
Nonce
Answer: D
Explanation:
An IV (Initialization Vector) is a value used to ensure that encrypting identical plaintext under the same key produces different ciphertexts, preventing pattern leakage. In many secure designs, the IV must be unique (and often unpredictable) per encryption operation. A common way to ensure uniqueness is to incorporate a nonce—a “number used once.” A nonce can be random, pseudo-random, or a counter-based value depending on the mode and security requirements. For example, CTR mode uses a nonce combined with a counter to produce unique input blocks; GCM uses a nonce/IV to ensure unique authentication and encryption behavior. The encryption key should remain stable across many operations and should not be used as the “randomizer” for IV generation; mixing key material into IV creation in an ad hoc way can create reuse or correlation issues. Plaintext and algorithm do not provide the needed uniqueness property. The nonce concept is specifically about ensuring one-time uniqueness of the starting value so that IV reuse does not repeat keystream blocks (stream modes) or reveal plaintext equality (CBC/CTR). Therefore, the correct choice is Nonce.
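A hedged sketch of nonce use with AES-CTR, assuming the third-party cryptography package; note that in this particular library the CTR "nonce" argument is the full 16-byte initial counter block.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)            # AES-256 key, reused across many messages
plaintext = b"same plaintext every time"

def encrypt(msg: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)      # fresh random value per message; never reused with this key
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce, enc.update(msg) + enc.finalize()

n1, c1 = encrypt(plaintext)
n2, c2 = encrypt(plaintext)
print(c1 != c2)                 # True: identical plaintexts yield different ciphertexts
```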
(What is the maximum key size (in bits) supported by AES?)
Options:
128
192
256
512
Answer: C
Explanation:
AES supports three standardized key sizes: 128, 192, and 256 bits, with a fixed block size of 128 bits. The maximum of these supported key sizes is 256 bits (AES-256). Key size affects resistance to brute-force key search: larger keys exponentially increase the search space. In practice, AES-128 is already considered strong against brute force with contemporary computing capabilities, while AES-256 is often chosen for compliance requirements, conservative security margins, or to hedge against future advances. AES-512 is not part of the AES standard; if 512-bit keys are desired, systems typically use different constructions (like using AES-256 in certain key-derivation or wrapping schemes) rather than changing AES itself. Therefore, the correct maximum supported AES key size is 256 bits.
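As a quick illustration (using the third-party cryptography package, which is an assumption, not part of the question), the library accepts exactly the three standardized key lengths and rejects a 512-bit key:

```python
import os
from cryptography.hazmat.primitives.ciphers import algorithms

for bits in (128, 192, 256):
    algorithms.AES(os.urandom(bits // 8))    # accepted: standardized AES key sizes

try:
    algorithms.AES(os.urandom(512 // 8))     # 512 bits is not defined for AES
except ValueError as err:
    print("rejected:", err)
```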
(Why should a forensic investigator create a hash of a victim’s hard drive and of the bitstream copy of the hard drive?)
Options:
To identify if someone opened the drive
To certify the information on the drive is correct
To establish who created the files on the drives
To verify that the drives are identical
Answer: D
Explanation:
In digital forensics, investigators must preserve evidence integrity and demonstrate an unbroken chain of custody. Creating a cryptographic hash (such as SHA-256) of the original drive and then hashing the forensic bitstream image provides a strong mathematical assurance that the copy is an exact, bit-for-bit replica. Because secure hash functions are designed so that any tiny change in data produces a dramatically different digest, matching hashes indicate the image contains identical data to the source at the time of acquisition. This is critical in legal and investigative contexts: analysis is performed on the copy, not the original, to avoid altering evidence. If the hashes match, the investigator can testify that the evidence examined is identical to what was collected, supporting admissibility and credibility. Hashing does not prove who created files, nor does it directly show whether someone “opened the drive”; it specifically validates the integrity and equivalence of the captured image. Therefore, hashing both artifacts is done to verify that the original and the bitstream copy are identical.
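A small sketch of the verification step using Python's standard hashlib; the device and image paths are hypothetical placeholders.

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Hash a file or raw device image in chunks so large drives fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: the source device and the acquired bitstream image
original_digest = sha256_file("/dev/sdb")
image_digest = sha256_file("evidence/disk01.dd")
print("identical copy" if original_digest == image_digest else "MISMATCH - do not proceed")
```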
(Which technique involves spotting variations in encrypted data and plotting how the characters relate to standard English characters?)
Options:
Brute force
Frequency analysis
Known plaintext
Chosen ciphertext
Answer: B
Explanation:
Frequency analysis is a classical cryptanalysis technique that exploits predictable statistical patterns in natural language. In English, certain letters (like E, T, A, O, I, N) occur more frequently than others, and common digrams/trigrams (TH, HE, IN, ER) appear with recognizable distribution. When a cipher preserves character boundaries (as in many substitution ciphers), the ciphertext will also show frequency patterns—though mapped to different symbols. The analyst counts ciphertext character occurrences, compares the distribution to expected English letter frequencies, and infers likely plaintext mappings. “Spotting variations” refers to observing differences in how often symbols appear and using that to plot relationships between ciphertext and standard English. Brute force instead tries all keys; known-plaintext attacks rely on having plaintext–ciphertext pairs; chosen-ciphertext attacks involve decrypting attacker-selected ciphertexts. Those are different attack models. Frequency analysis is specifically about statistical correlation between ciphertext symbols and language characteristics, which is why it is effective against monoalphabetic substitution and weak polyalphabetic schemes with short periods.
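A toy sketch against a Caesar-shifted message (the ciphertext is made up for illustration): count letter frequencies and assume the most common ciphertext letter stands for E.

```python
from collections import Counter
import string

ciphertext = "GHIHQG WKH HDVW ZDOO RI WKH FDVWOH"   # Caesar cipher, shift unknown

counts = Counter(c for c in ciphertext if c in string.ascii_uppercase)
top = counts.most_common(1)[0][0]                    # most frequent ciphertext letter
shift = (ord(top) - ord("E")) % 26                   # guess it corresponds to plaintext E

plaintext = "".join(
    chr((ord(c) - ord("A") - shift) % 26 + ord("A")) if c.isalpha() else c
    for c in ciphertext)
print(shift, plaintext)                              # 3 DEFEND THE EAST WALL OF THE CASTLE
```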
(Which encryption process sends a list of cipher suites that are supported for encrypted communications?)
Options:
Forward secrecy
ServerHello
ClientHello
Integrity check
Answer: C
Explanation:
In the TLS handshake, the ClientHello message is the client’s opening negotiation message and includes the client’s supported cryptographic capabilities. A key part of ClientHello is the offered cipher suites list, which advertises combinations of key exchange, authentication, encryption, and integrity/AEAD algorithms the client is willing to use. The server responds with ServerHello, selecting one of the offered cipher suites (in TLS 1.2 and earlier) and confirming protocol parameters. Forward secrecy is a property achieved by using ephemeral key exchange (e.g., (EC)DHE), not a specific message that “sends a list.” “Integrity check” is a security goal/mechanism, not the negotiation step. While TLS 1.3 changes the structure of negotiation (cipher suite list still appears in ClientHello but only covers AEAD and hash; key exchange is negotiated via extensions), the fundamental idea remains: the client proposes supported cipher suites in ClientHello, and the server picks compatible parameters. Therefore, the process that sends the list of supported cipher suites is the ClientHello.
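As a rough illustration with Python's standard ssl module (the exact output depends on the local OpenSSL build), this lists cipher suites the default client context would be willing to offer in its ClientHello:

```python
import ssl

ctx = ssl.create_default_context()
for suite in ctx.get_ciphers()[:5]:          # show the first few offered suites
    print(suite["name"], suite["protocol"])
```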
(An organization wants to digitally sign its software to guarantee the integrity of its source code. Which key should the customer use to decrypt the digest of the source code?)
Options:
Customer’s private key
Organization’s public key
Organization’s private key
Customer’s public key
Answer: B
Explanation:
When software is digitally signed, the organization computes a cryptographic hash (digest) of the software (or its manifest) and then signs that digest using the organization’s private key. Verification works in the opposite direction: the customer (verifier) uses the organization’s public key to validate the signature and recover/confirm the signed digest, then independently hashes the received software and compares the result. If the digests match and the signature validates under the public key, the customer has strong assurance that the software has not been altered since it was signed and that it was signed by the holder of the corresponding private key. The customer never needs the organization’s private key—sharing it would destroy security and enable forgery. Likewise, the customer’s own keys are irrelevant to verifying the publisher’s signature. The organization’s public key is typically delivered inside a certificate chain (code signing certificate) so the verifier can also validate publisher identity and trust. Therefore, the customer uses the organization’s public key for signature verification (often described as “decrypting” the signed digest).
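A hedged sketch of the sign/verify flow using RSA-PSS with the third-party cryptography package; key generation is shown inline only to keep the example self-contained, and the "software" bytes are illustrative.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The organization signs the release using its PRIVATE key
org_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
software = b"illustrative release bytes"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = org_private.sign(software, pss, hashes.SHA256())

# The customer verifies with the organization's PUBLIC key (normally taken from its
# code-signing certificate); verify() raises InvalidSignature if anything was altered
org_public = org_private.public_key()
org_public.verify(signature, software, pss, hashes.SHA256())
print("signature valid")
```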
(Two people want to communicate through secure email. The person creating the email wants to ensure only their friend can decrypt the email. Which key should the person creating the email use to encrypt the message?)
Options:
Sender’s public key
Recipient’s private key
Sender’s private key
Recipient’s public key
Answer: D
Explanation:
To ensure confidentiality so that only the intended recipient can decrypt an email, the sender must encrypt in a way that only the recipient can reverse. In public key cryptography, that means encrypting with the recipient’s public key. The recipient is the only party who should possess the matching private key, so only they can decrypt the ciphertext. This pattern is fundamental to PKI-based secure email systems such as S/MIME and OpenPGP: the sender looks up or is provided the recipient’s certificate/public key, encrypts the message (often by encrypting a randomly generated symmetric session key with the recipient’s public key), and the recipient uses their private key to recover the session key and decrypt the content. Encrypting with the sender’s private key would not provide confidentiality; it resembles signing because anyone with the sender’s public key could “decrypt” it. Encrypting with a private key of the recipient is also incorrect because private keys are not shared and should never leave the recipient’s control. Therefore, the correct key to encrypt the message so only the friend can decrypt it is the recipient’s public key.
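A hedged end-to-end sketch of the S/MIME-style hybrid pattern described above, again assuming the cryptography package; names like friend_private are illustrative.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Friend (recipient) key pair; the sender only ever sees the public half
friend_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
friend_public = friend_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: a random session key encrypts the body; the session key is wrapped to the recipient
session_key, nonce = os.urandom(32), os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, b"meet at noon", None)
wrapped_key = friend_public.encrypt(session_key, oaep)

# Recipient: only the friend's PRIVATE key can unwrap the session key
recovered = friend_private.decrypt(wrapped_key, oaep)
print(AESGCM(recovered).decrypt(nonce, body, None))   # b"meet at noon"
```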
(Which certificate encoding process is binary-based?)
Options:
Public Key Infrastructure (PKI)
Distinguished Encoding Rules (DER)
Rivest–Shamir–Adleman (RSA)
Privacy Enhanced Mail (PEM)
Answer: B
Explanation:
DER (Distinguished Encoding Rules) is a binary encoding format used to represent ASN.1 structures in a canonical, unambiguous way. X.509 certificates are defined using ASN.1, and DER provides a strict subset of BER (Basic Encoding Rules) that guarantees a single, unique encoding for any given data structure. That “unique encoding” property is important for cryptographic operations such as hashing and digital signatures, because different encodings of the same abstract data could otherwise produce different hashes and break signature verification. In contrast, PEM is not a binary encoding; it is essentially a Base64-encoded text wrapper around DER data, bounded by header/footer lines (e.g., “BEGIN CERTIFICATE”). PKI is an overall framework for certificate issuance, trust, and lifecycle management—not an encoding. RSA is an asymmetric algorithm used for encryption/signing, not a certificate encoding format. Therefore, the binary-based certificate encoding process among the options is DER.
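A hedged sketch (cryptography package assumed; the self-signed certificate exists only to have something to serialize) showing that the PEM form is just a Base64 text wrapper around the binary DER encoding:

```python
import base64
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
cert = (x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
        .sign(key, hashes.SHA256()))

der = cert.public_bytes(serialization.Encoding.DER)   # raw binary ASN.1
pem = cert.public_bytes(serialization.Encoding.PEM)   # Base64 text wrapper around the DER
body = b"".join(pem.splitlines()[1:-1])               # strip BEGIN/END CERTIFICATE lines
assert base64.b64decode(body) == der                  # the PEM body decodes to the DER bytes
```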
(Which mechanism can be applied to protect the integrity of plaintext when using AES?)
Options:
RC4
Message Authentication Code (MAC)
RSA
Kerberos key sharing
Answer: B
Explanation:
AES by itself is a symmetric block cipher that provides confidentiality, but not guaranteed integrity unless used in an authenticated mode. To protect integrity of the plaintext (ensuring it has not been altered), a Message Authentication Code (MAC) can be applied. In the classic Encrypt-then-MAC pattern, the sender encrypts the plaintext with AES and then computes a MAC (often HMAC-SHA-256 or CMAC-AES) over the ciphertext (and relevant headers). The receiver verifies the MAC before attempting decryption, preventing tampering and many padding-oracle style vulnerabilities. Alternatively, AES can be used in an AEAD mode like AES-GCM, which produces an authentication tag serving a similar purpose, but among the listed options the general integrity mechanism is “MAC.” RC4 is an unrelated stream cipher and does not provide integrity. RSA is asymmetric and not the standard integrity add-on for AES-encrypted bulk data. Kerberos is an authentication protocol and key distribution system, not a message integrity primitive. Therefore, to protect plaintext integrity when using AES, the correct mechanism is a Message Authentication Code.
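A hedged Encrypt-then-MAC sketch combining AES-CBC (via the cryptography package) with HMAC-SHA-256 from the standard library; keys and message are illustrative.

```python
import os
import hmac
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding as sym_padding

enc_key, mac_key = os.urandom(32), os.urandom(32)   # independent keys for encryption and MAC
iv = os.urandom(16)

# Encrypt with AES-CBC (PKCS7 padding), then MAC the IV + ciphertext
padder = sym_padding.PKCS7(128).padder()
padded = padder.update(b"wire transfer: 100") + padder.finalize()
enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(padded) + enc.finalize()
tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()

# Receiver verifies the tag BEFORE decrypting; reject the message on mismatch
recomputed = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()
assert hmac.compare_digest(tag, recomputed)
```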
(What is the RC4 encryption key size when utilizing WPA with Temporal Key Integrity Protocol (TKIP)?)
Options:
40 bits
56 bits
128 bits
256 bits
Answer: C
Explanation:
WPA with TKIP was designed as an interim improvement over WEP while still using the RC4 stream cipher for compatibility with legacy hardware. TKIP addresses WEP’s major weaknesses by introducing per-packet key mixing, a message integrity mechanism (“Michael”), and replay protection. In TKIP, the encryption key used with RC4 is 128 bits. Practically, TKIP derives a per-packet RC4 key from a 128-bit temporal key (TK), the transmitter’s MAC address, and a sequence counter (TKIP Sequence Counter, TSC) to avoid the simple IV reuse patterns that made WEP easy to break. Even with these improvements, TKIP has known weaknesses and is deprecated in favor of WPA2/WPA3 using AES-based CCMP/GCMP. But strictly for the question asked, TKIP’s RC4 keying material is based on a 128-bit key size, not 40/56-bit legacy sizes and not 256-bit.
(Which type of exploit involves looking for different inputs that generate the same hash?)
Options:
Birthday attack
Linear cryptanalysis
Algebraic attack
Differential cryptanalysis
Answer: A
Explanation:
A birthday attack targets hash functions by exploiting the birthday paradox: collisions (two different inputs producing the same hash output) can be found much faster than brute-forcing a specific preimage. For an n-bit hash, the expected work to find any collision is on the order of 2^(n/2), not 2^n. The attack is relevant because many security constructions rely on collision resistance—digital signatures, certificate fingerprints, integrity checks, and some commitment schemes. If an attacker can generate two different documents with the same hash, they may trick a signer into signing one version while later presenting the other as "signed," depending on the protocol. Linear cryptanalysis and differential cryptanalysis are primarily techniques against block ciphers, analyzing relationships between plaintext/ciphertext differences or linear approximations across rounds. Algebraic attacks treat the cipher as a system of equations. The description "looking for different inputs that generate the same hash" is the hallmark of collision-finding, and the classic framing for that is the birthday attack.
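A toy demonstration in Python: truncate SHA-256 to 32 bits (purely to keep the search fast) and watch a collision appear after roughly 2^16 attempts rather than 2^32.

```python
import hashlib
import itertools

def toy_hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:4]   # 32-bit truncation, for demonstration only

seen = {}
for i in itertools.count():
    msg = f"contract draft {i}".encode()
    digest = toy_hash(msg)
    if digest in seen:
        print(f"collision after {i + 1} tries: {seen[digest]!r} vs {msg!r}")
        break
    seen[digest] = msg
# Expected effort is on the order of 2**16 messages (the square root of the 2**32 output space)
```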
(What is an alternative to using a Certificate Revocation List (CRL) with certificates?)
Options:
Privacy Enhanced Mail (PEM)
Online Certificate Status Protocol (OCSP)
Root Certificate Authority (CA)
Policy Certificate Authority (CA)
Answer: B
Explanation:
OCSP is the primary online alternative to CRLs for checking whether a certificate has been revoked. With a CRL, a relying party periodically downloads a list of revoked certificate serial numbers published by the issuing CA (or CRL distribution point). That approach can be bandwidth-heavy, introduces latency between revocation and client awareness, and can result in clients using stale revocation data if updates are infrequent. OCSP improves this by allowing a client (or a server on the client’s behalf) to query an OCSP responder in near real time about the status of a specific certificate (good, revoked, or unknown). In practice, many TLS deployments use OCSP stapling, where the server periodically fetches a signed OCSP response from the CA’s responder and “staples” it to the TLS handshake, reducing client-side network calls and improving privacy (the CA doesn’t learn which site the client is visiting). Thus, OCSP provides a more timely, certificate-specific revocation status mechanism than CRLs while preserving the CA’s signed assurance.
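A hedged sketch of building an OCSP request with the cryptography package's x509.ocsp module; server.pem and issuer.pem are hypothetical file names standing in for the certificate being checked and its issuing CA certificate.

```python
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

# Hypothetical PEM files: the certificate to check and the CA certificate that issued it
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Ask the responder about this one certificate instead of downloading a whole CRL
request = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
der_request = request.public_bytes(serialization.Encoding.DER)
# der_request would be POSTed to the OCSP URL in the certificate's Authority Information
# Access extension; the signed response reports good, revoked, or unknown
```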
(A Linux user password is identified as follows:
$2a$08$AbCh0RCM8p8FGaYvRLI0H.Kng54gcnWCOQYIhas708UEZRQQjGBh4
Which hash algorithm should be used to salt this password?)
Options:
NTLM
SHA-512
MD5
bcrypt
Answer: D
Explanation:
The string format $2a$08$... is a well-known identifier for the bcrypt password hashing scheme. In common password-hash notation, the prefix indicates the algorithm and parameters: “$2a$” denotes bcrypt (version 2a), and “08” indicates the cost factor (work factor) controlling how computationally expensive hashing is. bcrypt is designed specifically for password storage: it includes a built-in salt and is intentionally slow and adaptive, making brute-force and GPU attacks far more expensive than fast general-purpose hashes like MD5 or SHA-512. NTLM and MD5 are obsolete for secure password storage due to speed and known weaknesses. SHA-512, while cryptographically strong as a hash, is still too fast for password hashing unless used in a dedicated password-hashing construction (e.g., PBKDF2, scrypt, Argon2) with appropriate parameters and salts. Since the given hash clearly matches bcrypt’s encoding, the correct algorithm is bcrypt, which incorporates salting and cost-based key stretching as part of its design.
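A short sketch with the third-party bcrypt package (pip install bcrypt); note that current library versions emit the newer $2b$ prefix rather than $2a$, but the format and cost-factor field are the same.

```python
import bcrypt

password = b"correct horse battery staple"       # illustrative password

# rounds=8 matches the "08" cost factor seen in the sample hash
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=8))
print(hashed)                                     # e.g. b"$2b$08$<22-char salt><31-char hash>"
print(bcrypt.checkpw(password, hashed))           # True; the salt is embedded in the hash itself
```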