Kubernetes and Cloud Native Associate Questions and Answers
Which statement about Secrets is correct?
Options:
A Secret is part of a Pod specification.
Secret data is encrypted with the cluster private key by default.
Secret data is base64 encoded and stored unencrypted by default.
A Secret can only be used for confidential data.
Answer:
C
Explanation:
The correct answer is C. By default, Kubernetes Secrets store their data as base64-encoded values in the API (backed by etcd). Base64 is an encoding mechanism, not encryption, so this does not provide confidentiality. Unless you explicitly configure encryption at rest for etcd (via the API server encryption provider configuration) and secure access controls, Secret contents should be treated as potentially readable by anyone with sufficient API access or access to etcd backups.
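As a minimal sketch (the name and sample value are illustrative), the following Secret manifest shows that the data field holds base64-encoded text that anyone with read access can decode:
apiVersion: v1
kind: Secret
metadata:
  name: demo-credentials          # hypothetical name
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=      # base64 of "password123": encoded, not encrypted
Running the value through any base64 decoder recovers the plaintext immediately, which is why RBAC restrictions and encryption at rest matter.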
Option A is misleading: a Secret is its own Kubernetes resource (kind: Secret). While Pods can reference Secrets (as environment variables or mounted volumes), the Secret itself is not “part of the Pod spec” as an embedded object. Option B is incorrect because Kubernetes does not automatically encrypt Secret data with a cluster private key by default; encryption at rest is optional and must be enabled. Option D is incorrect because Secrets can store a range of sensitive or semi-sensitive data (tokens, certs, passwords), but Kubernetes does not enforce “only confidential data” semantics; it’s a storage mechanism with size and format constraints.
Operationally, best practices include: enabling encryption at rest, limiting access via RBAC, avoiding broad “list/get secrets” permissions, using dedicated service accounts, auditing access, and considering external secrets managers (Vault, cloud KMS-backed solutions) for higher assurance. Also, don’t confuse “Secret” with “secure by default.” The default protection is mainly about avoiding accidental plaintext exposure in manifests, not about cryptographic security.
So the only correct statement in the options is C.
=========
Which of the following are tasks performed by a container orchestration tool?
Options:
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
Answer:
A
Explanation:
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination—placement + elasticity + self-healing—is the core of container orchestration, matching option A precisely.
=========
Which Kubernetes resource uses immutable: true boolean field?
Options:
Deployment
Pod
ConfigMap
ReplicaSet
Answer:
C
Explanation:
The immutable: true field is supported by ConfigMap (and also by Secrets, though Secret is not in the options), so C is correct. When a ConfigMap is marked immutable, its data can no longer be changed after creation. This is useful for protecting configuration from accidental modification and for improving cluster performance by reducing watch/update churn on frequently referenced configuration objects.
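A minimal sketch of an immutable ConfigMap, with hypothetical names and keys; note that immutable is a top-level field, a sibling of data:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1             # versioned name, since the data can never change in place
immutable: true
data:
  LOG_LEVEL: "info"
  FEATURE_FLAG: "enabled"
To change configuration you would create app-config-v2 and point the Pod template at it, which triggers a normal rollout.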
In Kubernetes, ConfigMaps store non-sensitive configuration as key-value pairs. They can be consumed by Pods as environment variables, command-line arguments, or mounted files in volumes. Without immutability, ConfigMap updates can trigger complex runtime behaviors: for example, file-mounted ConfigMap updates can eventually reflect in the volume (with some delay), but environment variables do not update automatically in running Pods. This can cause confusion and configuration drift between expected and actual behavior. Marking a ConfigMap immutable makes the configuration stable and encourages explicit rollout strategies (create a new ConfigMap with a new name and update the Pod template), which is generally more reliable for production delivery.
Why the other options are wrong: Deployments, Pods, and ReplicaSets do not use an immutable: true field as a standard top-level toggle in their API schema for the purpose described. These objects can be updated through the normal API mechanisms, and their updates are part of typical lifecycle operations (rolling updates, scaling, etc.). The immutability concept exists in Kubernetes, but the specific immutable boolean in this context is a recognized field for ConfigMap (and Secret) objects.
Operationally, immutable ConfigMaps help enforce safer practices: instead of editing live configuration in place, teams adopt versioned configuration artifacts and controlled rollouts via Deployments. This fits cloud-native principles of repeatability and reducing accidental production changes.
=========
What is the main purpose of the Ingress in Kubernetes?
Options:
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
Answer:
D
Explanation:
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. Although option D highlights only “based on their path,” path-based routing is a classic Ingress use case, and host-based routing is equally common.
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
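A sketch of a path-routing Ingress, assuming an NGINX ingress controller is installed and that the Services and TLS Secret named below exist (all names are hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx                 # must match an installed controller's class
  tls:
  - hosts: ["example.com"]
    secretName: example-tls               # hypothetical Secret holding the certificate
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-svc                # requests to example.com/app1 go here
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-svc                # requests to example.com/app2 go here
            port:
              number: 80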
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D.
=========
What is a Pod?
Options:
A networked application within Kubernetes.
A storage volume within Kubernetes.
A single container within Kubernetes.
A group of one or more containers within Kubernetes.
Answer:
D
Explanation:
A Pod is the smallest deployable/schedulable unit in Kubernetes and consists of a group of one or more containers that are deployed together on the same node—so D is correct. The key idea is that Kubernetes schedules Pods, not individual containers. Containers in the same Pod share important runtime context: they share the same network namespace (one Pod IP and port space) and can share storage volumes defined at the Pod level. This is why a Pod is often described as a “logical host” for its containers.
Most Pods run a single container, but multi-container Pods are common for sidecar patterns. For example, an application container might run alongside a service mesh proxy sidecar, a log shipper, or a config reloader. Because these containers share localhost networking, they can communicate efficiently without exposing extra network endpoints. Because they can share volumes, one container can produce files that another consumes (for example, writing logs to a shared volume).
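A sketch of a two-container Pod sharing an emptyDir volume; the images, paths, and command are placeholders chosen to illustrate the pattern, not a specific product setup:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}                          # Pod-scoped volume shared by both containers
  containers:
  - name: app
    image: nginx                          # stands in for the application container
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: log-shipper
    image: busybox                        # stands in for a log-forwarding sidecar
    command: ["sh", "-c", "touch /var/log/app/access.log && tail -F /var/log/app/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app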
Options A and B are incorrect because a Pod is not “an application” abstraction nor is it a storage volume. Pods can host applications, but they are the execution unit for containers rather than the application concept itself. Option C is incorrect because a Pod is not limited to a single container; “one or more containers” is fundamental to the Pod definition.
Operationally, understanding Pods is essential because many Kubernetes behaviors key off Pods: Services select Pods (typically by labels), autoscalers scale Pods (replica counts), probes determine Pod readiness/liveness, and scheduling constraints place Pods on nodes. When a Pod is replaced (for example during a Deployment rollout), a new Pod is created with a new UID and potentially a new IP—reinforcing why Services exist to provide stable access.
Therefore, the verified correct answer is D: a Pod is a group of one or more containers within Kubernetes.
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
Options:
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
Answer:
B
Explanation:
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is primarily intended to run a Pod (and in older versions could generate other resources, but it’s not the recommended/consistent way to create a Deployment in modern kubectl usage). Option C is invalid syntax: kubectl subcommand order is incorrect; you don’t say kubectl create nginx deployment. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
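The declarative equivalent of the imperative command is a Deployment manifest like the sketch below (the label key and value mirror what current kubectl generates, but treat them as illustrative), applied with kubectl apply -f:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                     # desired Pod count, maintained via a ReplicaSet
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx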
Therefore, B is the verified correct command.
=========
CI/CD stands for:
Options:
Continuous Information / Continuous Development
Continuous Integration / Continuous Development
Cloud Integration / Cloud Development
Continuous Integration / Continuous Deployment
Answer:
D
Explanation:
CI/CD is a foundational practice for delivering software rapidly and reliably, and it maps strongly to cloud native delivery workflows commonly used with Kubernetes. CI stands for Continuous Integration: developers merge code changes frequently into a shared repository, and automated systems build and test those changes to detect issues early. CD is commonly used to mean Continuous Delivery or Continuous Deployment depending on how far automation goes. In many certification contexts and simplified definitions like this question, CD is interpreted as Continuous Deployment, meaning every change that passes the automated pipeline is automatically released to production. That matches option D.
In a Kubernetes context, CI typically produces artifacts such as container images (built from Dockerfiles or similar build definitions), runs unit/integration tests, scans dependencies, and pushes images to a registry. CD then promotes those images into environments by updating Kubernetes manifests (Deployments, Helm charts, Kustomize overlays, etc.). Progressive delivery patterns (rolling updates, canary, blue/green) often use Kubernetes-native controllers and Service routing to reduce risk.
Why the other options are incorrect: “Continuous Development” isn’t the standard “D” term; it’s ambiguous and not the established acronym expansion. “Cloud Integration/Cloud Development” is unrelated. Continuous Delivery (in the stricter sense) means changes are always in a deployable state and releases may still require a manual approval step, while Continuous Deployment removes that final manual gate. But because the option set explicitly includes “Continuous Deployment,” and that is one of the accepted canonical expansions for CD, D is the correct selection here.
Practically, CI/CD complements Kubernetes’ declarative model: pipelines update desired state (Git or manifests), and Kubernetes reconciles it. This combination enables frequent releases, repeatability, reduced human error, and faster recovery through automated rollbacks and controlled rollout strategies.
=========
Which of the following cloud native proxies is used for ingress/egress in a service mesh and can also serve as an application gateway?
Options:
Frontend proxy
Kube-proxy
Envoy proxy
Reverse proxy
Answer:
C
Explanation:
Envoy Proxy is a high-performance, cloud-native proxy widely used for ingress and egress traffic management in service mesh architectures, and it can also function as an application gateway. It is the foundational data-plane component for popular service meshes such as Istio, Consul, and AWS App Mesh, making option C the correct answer.
In a service mesh, Envoy is typically deployed as a sidecar proxy alongside each application Pod. This allows Envoy to transparently intercept and manage all inbound and outbound traffic for the service. Through this model, Envoy enables advanced traffic management features such as load balancing, retries, timeouts, circuit breaking, mutual TLS, and fine-grained observability without requiring application code changes.
Envoy is also commonly used at the mesh boundary to handle ingress and egress traffic. When deployed as an ingress gateway, Envoy acts as the entry point for external traffic into the mesh, performing TLS termination, routing, authentication, and policy enforcement. As an egress gateway, it controls outbound traffic from the mesh to external services, enabling security controls and traffic visibility. These capabilities allow Envoy to serve effectively as an application gateway, not just an internal proxy.
Option A, “Frontend proxy,” is a generic term and not a specific cloud-native component. Option B, kube-proxy, is responsible for implementing Kubernetes Service networking rules at the node level and does not provide service mesh features or gateway functionality. Option D, “Reverse proxy,” is a general architectural pattern rather than a specific cloud-native proxy implementation.
Envoy’s extensibility, performance, and deep integration with Kubernetes and service mesh control planes make it the industry-standard proxy for modern cloud-native networking. Its ability to function both as a sidecar proxy and as a centralized ingress or egress gateway clearly establishes Envoy proxy as the correct and verified answer.
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
Options:
CoreDNS
CNI
gRPC
Envoy
Answer:
D
Explanation:
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) sidecar proxies alongside Pods in a service mesh (like Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it’s purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
What is the name of the Kubernetes resource used to expose an application?
Options:
Port
Service
DNS
Deployment
Answer:
B
Explanation:
To expose an application running on Pods so that other components can reliably reach it, Kubernetes uses a Service, making B the correct answer. Pods are ephemeral: they can be recreated, rescheduled, and scaled, which means Pod IPs change. A Service provides a stable endpoint (virtual IP and DNS name) and load-balances traffic across the set of Pods selected by its label selector.
Services come in multiple forms. The default is ClusterIP, which exposes the application inside the cluster. NodePort exposes the Service on a static port on each node, and LoadBalancer (in supported clouds) provisions an external load balancer that routes traffic to the Service. ExternalName maps a Service name to an external DNS name. But across these variants, the abstraction is consistent: a Service defines how to access a logical group of Pods.
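A minimal sketch of a Service selecting Pods by label (names, labels, and ports are illustrative); omitting the type field gives the default ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  # type: ClusterIP is the default when omitted
  selector:
    app: web                      # traffic goes to Ready Pods carrying this label
  ports:
  - port: 80                      # stable port on the Service's virtual IP
    targetPort: 8080              # container port on the selected Pods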
Option A (Port) is not a Kubernetes resource type; ports are fields within resources. Option C (DNS) is a supporting mechanism (CoreDNS creates DNS entries for Services), but DNS is not the resource you create to expose the app. Option D (Deployment) manages Pod replicas and rollouts, but it does not directly provide stable networking access; you typically pair a Deployment with a Service to expose it.
This is a core cloud-native pattern: controllers manage compute, Services manage stable connectivity, and higher-level gateways like Ingress provide L7 routing for HTTP/HTTPS. So, the Kubernetes resource used to expose an application is Service (B).
=========
What can be used to create a job that will run at specified times/dates or on a repeating schedule?
Options:
Job
CalendarJob
BatchJob
CronJob
Answer:
D
Explanation:
The correct answer is D: CronJob. A Kubernetes CronJob is specifically designed for creating Jobs on a schedule—either at specified times/dates (expressed via cron syntax) or on a repeating interval (hourly, daily, weekly). When the schedule triggers, the CronJob controller creates a Job, and the Job controller creates the Pods that execute the workload to completion.
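A sketch of a CronJob that creates a Job every night at 02:00 (the name, image, and command are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"                # cron syntax: minute hour day-of-month month day-of-week
  concurrencyPolicy: Forbid            # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3        # retain a few finished Jobs for inspection
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox            # stands in for the batch workload
            command: ["sh", "-c", "echo generating report"]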
Option A (Job) is not inherently scheduled. A Job runs when you create it, and it continues until it completes successfully or fails according to its retry/backoff behavior. If you want it to run periodically, you need something else to create the Job each time. CronJob is the built-in mechanism for that scheduling.
Options B and C are not standard Kubernetes workload objects. Kubernetes does not include “CalendarJob” or “BatchJob” as official API kinds. The scheduling primitive is CronJob.
CronJobs also include important operational controls: concurrency policies prevent overlapping runs, deadlines control missed schedules, and history limits manage old Job retention. This makes CronJobs more robust than ad-hoc scheduling approaches and keeps the workload lifecycle visible in the Kubernetes API (status/events/logs). It also means you can apply standard Kubernetes patterns: use a service account with least privilege, mount Secrets/ConfigMaps, run in specific namespaces, and manage resource requests/limits so that scheduled workloads don’t destabilize the cluster.
So the correct Kubernetes resource for scheduled and repeating job execution is CronJob (D).
=========
Kubernetes ___ allows you to automatically manage the number of nodes in your cluster to meet demand.
Options:
Node Autoscaler
Cluster Autoscaler
Horizontal Pod Autoscaler
Vertical Pod Autoscaler
Answer:
B
Explanation:
Kubernetes supports multiple autoscaling mechanisms, but they operate at different layers. The question asks specifically about automatically managing the number of nodes in the cluster, which is the role of the Cluster Autoscaler—therefore B is correct.
Cluster Autoscaler monitors the scheduling state of the cluster. When Pods are pending because there are not enough resources (CPU/memory) available on existing nodes—meaning the scheduler cannot place them—Cluster Autoscaler can request that the underlying infrastructure (typically a cloud provider node group / autoscaling group) add nodes. Conversely, when nodes are underutilized and Pods can be rescheduled elsewhere, Cluster Autoscaler can drain those nodes (respecting disruption constraints like PodDisruptionBudgets) and then remove them to reduce cost. This aligns with cloud-native elasticity: scale infrastructure up and down automatically based on workload needs.
The other options are different: Horizontal Pod Autoscaler (HPA) changes the number of Pod replicas for a workload (like a Deployment) based on metrics (CPU utilization, memory, or custom metrics). It scales the application layer, not the node layer. Vertical Pod Autoscaler (VPA) changes resource requests/limits (CPU/memory) for Pods, effectively “scaling up/down” the size of individual Pods. It also does not directly change node count, though its adjustments can influence scheduling pressure. “Node Autoscaler” is not the canonical Kubernetes component name used in standard terminology; the widely referenced upstream component for node count is Cluster Autoscaler.
In real systems, these autoscalers often work together: HPA increases replicas when traffic rises; that may cause Pods to go Pending if nodes are full; Cluster Autoscaler then adds nodes; scheduling proceeds; later, traffic drops, HPA reduces replicas and Cluster Autoscaler removes nodes. This layered approach provides both performance and cost efficiency.
=========
Which Kubernetes Service type exposes a service only within the cluster?
Options:
ClusterIP
NodePort
LoadBalancer
ExternalName
Answer:
A
Explanation:
In Kubernetes, a Service provides a stable network endpoint for a set of Pods and abstracts away their dynamic nature. Kubernetes offers several Service types, each designed for different exposure requirements. Among these, ClusterIP is the Service type that exposes an application only within the cluster, making it the correct answer.
When a Service is created with the ClusterIP type, Kubernetes assigns it a virtual IP address that is reachable exclusively from within the cluster’s network. This IP is used by other Pods and internal components to communicate with the Service through cluster DNS or environment variables. External traffic from outside the cluster cannot directly access a ClusterIP Service, which makes it ideal for internal APIs, backend services, and microservices that should not be publicly exposed.
Option B (NodePort) is incorrect because NodePort exposes the Service on a static port on each node’s IP address, allowing access from outside the cluster. Option C (LoadBalancer) is incorrect because it provisions an external load balancer—typically through a cloud provider—to expose the Service publicly. Option D (ExternalName) is incorrect because it does not create a proxy or internal endpoint at all; instead, it maps the Service name to an external DNS name outside the cluster.
ClusterIP is also the default Service type in Kubernetes. If no type is explicitly specified in a Service manifest, Kubernetes automatically assigns it as ClusterIP. This default behavior reflects the principle of least exposure, encouraging internal-only access unless external access is explicitly required.
From a cloud native architecture perspective, ClusterIP Services are fundamental to building secure, scalable microservices systems. They enable internal service-to-service communication while reducing the attack surface by preventing unintended external access.
According to Kubernetes documentation, ClusterIP Services are intended for internal communication within the cluster and are not reachable from outside the cluster network. Therefore, ClusterIP is the correct and fully verified answer, making option A the right choice.
=========
What is a Kubernetes service with no cluster IP address called?
Options:
Headless Service
Nodeless Service
IPLess Service
Specless Service
Answer:
A
Explanation:
A Kubernetes Service normally provides a stable virtual IP (ClusterIP) and a DNS name that load-balances traffic across matching Pods. A headless Service is a special type of Service where Kubernetes does not allocate a ClusterIP. Instead, the Service’s DNS returns individual Pod IPs (or other endpoint records), allowing clients to connect directly to specific backends rather than through a single virtual IP. That is why the correct answer is A (Headless Service).
Headless Services are created by setting spec.clusterIP: None. When you do this, kube-proxy does not program load-balancing rules for a virtual IP because there isn’t one. Instead, service discovery is handled via DNS records that point to the actual endpoints. This behavior is especially important for stateful or identity-sensitive systems where clients must talk to a particular replica (for example, databases, leader/follower clusters, or StatefulSet members).
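A minimal sketch of a headless Service (name, label, and port are illustrative); the only difference from a normal Service is clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: db                         # often referenced by a StatefulSet's serviceName field
spec:
  clusterIP: None                  # headless: DNS returns the endpoint (Pod) IPs directly
  selector:
    app: db
  ports:
  - port: 5432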
This is also why headless Services pair naturally with StatefulSets. StatefulSets provide stable network identities (pod-0, pod-1, etc.) and stable DNS names. The headless Service provides the DNS domain that resolves each Pod’s stable hostname to its IP, enabling peer discovery and consistent addressing even as Pods move between nodes.
The other options are distractors: “Nodeless,” “IPLess,” and “Specless” are not Kubernetes Service types. In the core API, the Service “types” are things like ClusterIP, NodePort, LoadBalancer, and ExternalName; “headless” is a behavioral mode achieved through the ClusterIP field.
In short: a headless Service removes the virtual IP abstraction and exposes endpoint-level discovery. It’s a deliberate design choice when load-balancing is not desired or when the application itself handles routing, membership, or sharding.
=========
Which statement about Ingress is correct?
Options:
Ingress provides a simple way to track network endpoints within a cluster.
Ingress is a Service type like NodePort and ClusterIP.
Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
Ingress exposes routes from outside the cluster to Services in the cluster.
Answer:
D
Explanation:
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice). Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
=========
What is the primary mechanism to identify grouped objects in a Kubernetes cluster?
Options:
Custom Resources
Labels
Label Selector
Pod
Answer:
B
Explanation:
Kubernetes groups and organizes objects primarily using labels, so B is correct. Labels are key-value pairs attached to objects (Pods, Deployments, Services, Nodes, etc.) and are intended to be used for identifying, selecting, and grouping resources in a flexible, user-defined way.
Labels enable many core Kubernetes behaviors. For example, a Service selects the Pods that should receive traffic by matching a label selector against Pod labels. A Deployment’s ReplicaSet similarly uses label selectors to determine which Pods belong to the replica set. Operators and platform tooling also rely on labels to group resources by application, environment, team, or cost center. This is why labeling is considered foundational Kubernetes hygiene: consistent labels make automation, troubleshooting, and governance easier.
A “label selector” (option C) is how you query/group objects based on labels, but the underlying primary mechanism is still the labels themselves. Without labels applied to objects, selectors have nothing to match. Custom Resources (option A) extend the API with new kinds, but they are not the primary grouping mechanism across the cluster. “Pod” (option D) is a workload unit, not a grouping mechanism.
Practically, Kubernetes recommends common label keys like app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/part-of to standardize grouping. Those conventions improve interoperability with dashboards, GitOps tooling, and policy engines.
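As an illustrative sketch (names and values are hypothetical), a Deployment carrying the recommended label keys, with its selector matching on one of them:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  labels:
    app.kubernetes.io/name: checkout
    app.kubernetes.io/instance: checkout-prod
    app.kubernetes.io/part-of: storefront
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: checkout-prod    # selectors match labels, nothing else
  template:
    metadata:
      labels:
        app.kubernetes.io/name: checkout
        app.kubernetes.io/instance: checkout-prod
        app.kubernetes.io/part-of: storefront
    spec:
      containers:
      - name: checkout
        image: nginx                               # placeholder image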
So, when the question asks for the primary mechanism used to identify grouped objects in Kubernetes, the most accurate answer is Labels (B)—they are the universal metadata primitive used to group and select resources.
=========
Which type of Service requires manual creation of Endpoints?
Options:
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
Answer:
B
Explanation:
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
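A sketch of the selector-less pattern, assuming a hypothetical external database at 10.0.40.15; because the Service has no selector, a matching Endpoints object is created by hand:
apiVersion: v1
kind: Service
metadata:
  name: external-db               # in-cluster clients use this stable DNS name
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db               # must match the Service name exactly
subsets:
- addresses:
  - ip: 10.0.40.15                # hypothetical IP of the external backend
  ports:
  - port: 5432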
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
=========
Which of the following statements is correct concerning Open Policy Agent (OPA)?
Options:
The policies must be written in Python language.
Kubernetes can use it to validate requests and apply policies.
Policies can only be tested when published.
It cannot be used outside Kubernetes.
Answer:
B
Explanation:
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy—such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
Option A is incorrect because OPA policies are written in Rego, not Python. Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage. Option D is incorrect because OPA is designed to be platform-agnostic—it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
=========
Which of the following is a recommended security habit in Kubernetes?
Options:
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
Answer:
B
Explanation:
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
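A sketch of a container securityContext applying these habits (the image and UID are placeholders; real images must be built to run unprivileged):
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: nginx                              # placeholder; choose an image that runs as non-root
    securityContext:
      allowPrivilegeEscalation: false         # the recommended default from option B
      runAsNonRoot: true
      runAsUser: 10001                        # any non-zero UID
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]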
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: Disallow privilege escalation.
=========
In Kubernetes, what is the primary purpose of using annotations?
Options:
To control the access permissions for users and service accounts.
To provide a way to attach metadata to objects.
To specify the deployment strategy for applications.
To define the specifications for resource limits and requests.
Answer:
B
Explanation:
Annotations in Kubernetes are a flexible mechanism for attaching non-identifying metadata to Kubernetes objects. Their primary purpose is to store additional information that is not used for object selection or grouping, which makes Option B the correct answer.
Unlike labels, which are designed to be used for selection, filtering, and grouping of resources (for example, by Services or Deployments), annotations are intended purely for informational or auxiliary purposes. They allow users, tools, and controllers to store arbitrary key–value data on objects without affecting Kubernetes’ core behavior. This makes annotations ideal for storing data such as build information, deployment timestamps, commit hashes, configuration hints, or ownership details.
Annotations are commonly consumed by external tools and controllers rather than by the Kubernetes scheduler or control plane for decision-making. For example, ingress controllers, service meshes, monitoring agents, and CI/CD systems often read annotations to enable or customize specific behaviors. Because annotations are not used for querying or selection, Kubernetes places no strict size or structure requirements on their values beyond general object size limits.
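A sketch of annotations alongside labels on a Pod; the example.com/* keys are hypothetical, and prometheus.io/scrape is shown only as a common tooling convention:
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web                                  # labels: identifying, used by selectors
  annotations:                                # annotations: non-identifying metadata for tools and humans
    example.com/git-commit: "9f4c2e1"
    example.com/deployed-by: "ci-pipeline"
    prometheus.io/scrape: "true"
spec:
  containers:
  - name: web
    image: nginx                              # placeholder image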
Option A is incorrect because access permissions are managed using Role-Based Access Control (RBAC), which relies on roles, role bindings, and service accounts—not annotations. Option C is incorrect because deployment strategies (such as RollingUpdate or Recreate) are defined in the specification of workload resources like Deployments, not through annotations. Option D is also incorrect because resource limits and requests are specified explicitly in the Pod or container spec under the resources field.
In summary, annotations provide a powerful and extensible way to associate metadata with Kubernetes objects without influencing scheduling, selection, or identity. They support integration, observability, and operational tooling while keeping core Kubernetes behavior predictable and stable. This design intent is clearly documented in Kubernetes metadata concepts, making Option B the correct and verified answer.
=========
In which framework do the developers no longer have to deal with capacity, deployments, scaling and fault tolerance, and OS?
Options:
Docker Swarm
Kubernetes
Mesos
Serverless
Answer:
D
Explanation:
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they also require managing the underlying capacity and OS-level aspects. They are not “no longer have to deal with capacity and OS” frameworks.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
=========
A site reliability engineer needs to temporarily prevent new Pods from being scheduled on node-2 while keeping the existing workloads running without disruption. Which kubectl command should be used?
Options:
kubectl cordon node-2
kubectl delete node-2
kubectl drain node-2
kubectl pause deployment
Answer:
A
Explanation:
In Kubernetes, node maintenance and availability are common operational tasks, and the platform provides specific commands to control how the scheduler places Pods on nodes. When the requirement is to temporarily prevent new Pods from being scheduled on a node without affecting the currently running Pods, the correct approach is to cordon the node.
The command kubectl cordon node-2 marks the node as unschedulable. This means the Kubernetes scheduler will no longer place any new Pods onto that node. Importantly, cordoning a node does not evict, restart, or interrupt existing Pods. All workloads already running on the node continue operating normally. This makes cordoning ideal for scenarios such as diagnostics, monitoring, or preparing for future maintenance while ensuring zero workload disruption.
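Under the hood, cordon simply patches the Node object; a minimal sketch of the field it sets (all other Node fields omitted):
apiVersion: v1
kind: Node
metadata:
  name: node-2
spec:
  unschedulable: true            # set by kubectl cordon; kubectl uncordon resets it to false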
Option B, kubectl delete node-2, is incorrect because deleting a node removes it entirely from the cluster. This action would cause Pods running on that node to be terminated and rescheduled elsewhere, resulting in disruption—exactly what the question specifies must be avoided.
Option C, kubectl drain node-2, is also incorrect in this context. Draining a node safely evicts Pods (except for certain exclusions like DaemonSets) and reschedules them onto other nodes. While drain is useful for maintenance and upgrades, it does not keep existing workloads running on the node, making it unsuitable here.
Option D, kubectl pause deployment, applies only to Deployments and merely pauses rollout updates. It does not affect node-level scheduling behavior and has no impact on where Pods are placed by the scheduler.
Therefore, the correct and verified answer is Option A: kubectl cordon node-2, which aligns with Kubernetes operational best practices and official documentation for non-disruptive node management.
=========
In a Kubernetes cluster, which scenario best illustrates the use case for a StatefulSet?
Options:
A web application that requires multiple replicas for load balancing.
A service that routes traffic to various microservices in the cluster.
A background job that runs periodically and does not maintain state.
A database that requires persistent storage and stable network identities.
Answer:
D
Explanation:
A StatefulSet is a Kubernetes workload API object specifically designed to manage stateful applications. Unlike Deployments or ReplicaSets, which are intended for stateless workloads, StatefulSets provide guarantees about the ordering, uniqueness, and persistence of Pods. These guarantees are critical for applications that rely on stable identities and durable storage, such as databases, message brokers, and distributed systems.
The defining characteristics of a StatefulSet include stable network identities, persistent storage, and ordered deployment and scaling. Each Pod created by a StatefulSet receives a unique and predictable name (for example, database-0, database-1), which remains consistent across Pod restarts. This stable identity is essential for stateful applications that depend on fixed hostnames for leader election, replication, or peer discovery. Additionally, StatefulSets are commonly used with PersistentVolumeClaims, ensuring that each Pod is bound to its own persistent storage that is retained even if the Pod is rescheduled or restarted.
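A condensed sketch of a StatefulSet (names, image, and sizes are placeholders), assuming a headless Service named database-headless exists to provide the stable per-Pod DNS identities:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database                       # Pods are named database-0, database-1, ...
spec:
  serviceName: database-headless       # hypothetical headless Service for per-Pod DNS
  replicas: 2
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: db
        image: postgres                # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi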
Option A is incorrect because web applications that scale horizontally for load balancing are typically stateless and are best managed by Deployments, which allow Pods to be created and destroyed freely without preserving identity. Option B is incorrect because traffic routing to microservices is handled by Services or Ingress resources, not StatefulSets. Option C is incorrect because periodic background jobs that do not maintain state are better suited for Jobs or CronJobs.
Option D correctly represents the ideal use case for a StatefulSet. Databases require persistent data storage, stable network identities, and predictable startup and shutdown behavior. StatefulSets ensure that Pods are started, stopped, and updated in a controlled order, which helps maintain data consistency and application reliability. According to Kubernetes documentation, whenever an application requires stable identities, ordered deployment, and persistent state, a StatefulSet is the recommended and verified solution, making option D the correct answer.
=========
What is ephemeral storage?
Options:
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
Answer:
A
Explanation:
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because multiple consumers is about access semantics (ReadWriteMany etc.) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
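A sketch combining these practices (names, sizes, and the command are placeholders): an emptyDir scratch volume with a size limit plus explicit ephemeral-storage requests and limits:
apiVersion: v1
kind: Pod
metadata:
  name: scratch-worker
spec:
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi                     # cap the temporary volume's growth
  containers:
  - name: worker
    image: busybox                       # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /tmp/work
    resources:
      requests:
        ephemeral-storage: "500Mi"       # considered at scheduling time
      limits:
        ephemeral-storage: "1Gi"         # exceeding this can lead to Pod eviction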
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
Which of the following systems is NOT compatible with the CRI runtime interface standard?
Options:
CRI-O
dockershim
systemd
containerd
Answer:
C
Explanation:
Kubernetes uses the Container Runtime Interface (CRI) to support pluggable container runtimes. The kubelet talks to a CRI-compatible runtime via gRPC, and that runtime is responsible for pulling images and running containers. In this context, containerd and CRI-O are CRI-compatible container runtimes (or runtime stacks) used widely with Kubernetes, and dockershim historically served as a compatibility layer that allowed kubelet to talk to Docker Engine as if it were CRI (before dockershim was removed from kubelet in newer Kubernetes versions). That leaves systemd as the correct “NOT compatible with CRI” answer, so C is correct.
systemd is an init system and service manager for Linux. While it can be involved in how services (like kubelet) are started and managed on the host, it is not a container runtime implementing CRI. It does not provide CRI gRPC endpoints for kubelet, nor does it manage containers in the CRI sense.
The deeper Kubernetes concept here is separation of responsibilities: kubelet is responsible for Pod lifecycle at the node level, but it delegates “run containers” to a runtime via CRI. Runtimes like containerd and CRI-O implement that contract; Kubernetes can swap them without changing kubelet logic. Historically, dockershim translated kubelet’s CRI calls into Docker Engine calls. Even though dockershim is no longer part of kubelet, it was still “CRI-adjacent” in purpose and often treated as compatible in older curricula.
Therefore, among the provided options, systemd is the only one that is clearly not a CRI-compatible runtime system, making C correct.
=========
How does cert-manager integrate with Kubernetes resources to provide TLS certificates for an application?
Options:
It manages Certificate resources and Secrets that can be used by Ingress objects for TLS.
It replaces default Kubernetes API certificates with those from external authorities.
It updates kube-proxy configuration to ensure encrypted traffic between Services.
It injects TLS certificates directly into Pods when the workloads are deployed.
Answer:
A
Explanation:
cert-manager is a widely adopted Kubernetes add-on that automates the management and lifecycle of TLS certificates in cloud native environments. Its primary function is to issue, renew, and manage certificates by integrating directly with Kubernetes-native resources, rather than modifying core cluster components or injecting certificates manually into workloads.
Option A correctly describes how cert-manager operates. cert-manager introduces Custom Resource Definitions (CRDs) such as Certificate, Issuer, and ClusterIssuer. These resources define how certificates should be requested and from which certificate authority they should be obtained, such as Let’s Encrypt or a private PKI. Once a certificate is successfully issued, cert-manager stores it in a Kubernetes Secret. These Secrets can then be referenced by Ingress resources, Gateway API resources, or directly by applications to enable TLS.
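A sketch of a Certificate resource (the domain, Secret name, and issuer name are hypothetical, and a ClusterIssuer with that name is assumed to exist); cert-manager writes the issued certificate into the named Secret, which an Ingress can then reference under spec.tls:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com-tls
spec:
  secretName: example-com-tls        # Secret cert-manager creates and keeps renewed
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt-prod           # assumed ClusterIssuer configured elsewhere
    kind: ClusterIssuer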
Option B is incorrect because cert-manager does not replace or interfere with Kubernetes API server certificates. The Kubernetes control plane manages its own internal certificates independently, and cert-manager is focused on application-level TLS, not control plane security.
Option C is incorrect because cert-manager does not interact with kube-proxy or manage service-to-service encryption. Traffic encryption between Services is typically handled by service meshes or application-level TLS configurations, not cert-manager.
Option D is incorrect because cert-manager does not inject certificates directly into Pods at deployment time. Instead, Pods consume certificates indirectly by mounting the Secrets created and maintained by cert-manager. This design aligns with Kubernetes best practices by keeping certificate management decoupled from application deployment logic.
According to Kubernetes and cert-manager documentation, cert-manager’s strength lies in its native integration with Kubernetes APIs and declarative workflows. By managing Certificate resources and automatically maintaining Secrets for use by Ingress or Gateway resources, cert-manager simplifies TLS management, reduces operational overhead, and improves security across cloud native application delivery pipelines. This makes option A the accurate and fully verified answer.
=========
Which of these is a valid container restart policy?
Options:
On login
On update
On start
On failure
Answer:
D
Explanation:
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (fail and leave it terminated)
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
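A minimal sketch of a batch-style Pod using the OnFailure policy (name, image, and command are placeholders; the deliberate non-zero exit just shows when a restart happens):
apiVersion: v1
kind: Pod
metadata:
  name: one-shot-task
spec:
  restartPolicy: OnFailure            # valid values: Always (default), OnFailure, Never
  containers:
  - name: task
    image: busybox                    # placeholder image
    command: ["sh", "-c", "exit 1"]   # non-zero exit code triggers a restart under OnFailure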
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. For example, Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Deployments usually imply “Always” because the workload should keep serving traffic, and a crashed container should be restarted. Also note that controllers interact with restarts: a Deployment may recreate Pods if they fail readiness, while a Job counts completions and failures based on Pod termination behavior.
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
What happens if only a limit is specified for a resource and no admission-time mechanism has applied a default request?
Options:
Kubernetes will create the container but it will fail with CrashLoopBackOff.
Kubernetes does not allow containers to be created without request values, causing eviction.
Kubernetes copies the specified limit and uses it as the requested value for the resource.
Kubernetes chooses a random value and uses it as the requested value for the resource.
Answer:
C
Explanation:
In Kubernetes, resource management for containers is based on requests and limits. Requests represent the minimum amount of CPU or memory required for scheduling decisions, while limits define the maximum amount a container is allowed to consume at runtime. Understanding how Kubernetes behaves when only a limit is specified is important for predictable scheduling and resource utilization.
If a container specifies a resource limit but does not explicitly specify a resource request, Kubernetes applies a well-defined default behavior. In this case, Kubernetes automatically sets the request equal to the specified limit. This behavior ensures that the scheduler has a concrete request value to use when deciding where to place the Pod. Without a request value, the scheduler would not be able to make accurate placement decisions, as scheduling is entirely request-based.
This defaulting behavior applies independently to each resource type, such as CPU and memory. For example, if a container sets a memory limit of 512Mi but does not define a memory request, Kubernetes treats the memory request as 512Mi as well. The same applies to CPU limits. As a result, the Pod is scheduled as if it requires the full amount of resources defined by the limit.
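For example, in a sketch like the following (the Pod and image names are illustrative), only limits are declared, so the API server defaults the requests to the same values:

apiVersion: v1
kind: Pod
metadata:
  name: limit-only-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        limits:
          cpu: "500m"
          memory: "512Mi"
        # no requests specified: Kubernetes treats requests.cpu as 500m
        # and requests.memory as 512Mi for scheduling purposes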
Option A is incorrect because specifying only a limit does not cause a container to crash or enter CrashLoopBackOff. CrashLoopBackOff is related to application failures, not resource specification defaults. Option B is incorrect because Kubernetes allows containers to be created without explicit requests, relying on defaulting behavior instead. Option D is incorrect because Kubernetes never assigns random values for resource requests.
This behavior is clearly defined in Kubernetes resource management documentation and is especially relevant when admission controllers like LimitRange are not applying default requests. While valid, relying solely on limits can reduce cluster efficiency, as Pods may reserve more resources than they actually need. Therefore, best practice is to explicitly define both requests and limits.
Thus, the correct and verified answer is Option C.
=========
What is the Kubernetes abstraction that allows groups of Pods to be exposed inside a Kubernetes cluster?
Options:
Deployment
Daemon
Unit
Service
Answer:
DExplanation:
In Kubernetes, Pods are ephemeral by design. They can be created, destroyed, rescheduled, or replaced at any time, and each Pod receives its own IP address. Because of this dynamic nature, directly relying on Pod IPs for communication is unreliable. To solve this problem, Kubernetes provides the Service abstraction, which allows a stable way to expose and access a group of Pods inside (and sometimes outside) the cluster.
A Service defines a logical set of Pods using label selectors and provides a consistent virtual IP address and DNS name for accessing them. Even if individual Pods fail or are replaced, the Service remains stable, and traffic is automatically routed to healthy Pods that match the selector. This makes Services a fundamental building block for internal communication between applications within a Kubernetes cluster.
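A minimal Service sketch (the names and ports are placeholders) shows how a label selector ties the stable Service to its backend Pods:

apiVersion: v1
kind: Service
metadata:
  name: web                        # hypothetical name
spec:
  selector:
    app: web                       # routes to Pods labeled app=web
  ports:
    - port: 80                     # stable Service port on the virtual IP
      targetPort: 8080             # container port on the backend Pods

With no type set, this defaults to ClusterIP, i.e. internal-only exposure.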
Deployments (Option A) are responsible for managing the lifecycle of Pods, including scaling, rolling updates, and self-healing. However, Deployments do not provide networking or exposure capabilities. They control how Pods run, not how they are accessed.
Option B, “Daemon,” is not a valid Kubernetes resource. The correct resource is a DaemonSet, which ensures that a copy of a Pod runs on each (or selected) node in the cluster. DaemonSets are used for node-level workloads like logging or monitoring agents, not for exposing Pods.
Option C, “Unit,” is not a Kubernetes concept at all and does not exist in Kubernetes architecture.
Services can be configured in different ways depending on access requirements, such as ClusterIP for internal access, NodePort or LoadBalancer for external access, and Headless Services for direct Pod discovery. Regardless of type, the core purpose of a Service is to expose a group of Pods in a stable and reliable way.
Therefore, the correct and verified answer is Option D: Service, which is the Kubernetes abstraction specifically designed to expose groups of Pods within a cluster.
=========
Which option represents best practices when building container images?
Options:
Use multi-stage builds, use the latest tag for image version, and only install necessary packages.
Use multi-stage builds, pin the base image version to a specific digest, and install extra packages just in case.
Use multi-stage builds, pin the base image version to a specific digest, and only install necessary packages.
Avoid multi-stage builds, use the latest tag for image version, and install extra packages just in case.
Answer:
CExplanation:
Building secure, efficient, and reproducible container images is a core principle of cloud native application delivery. Kubernetes documentation and container security best practices emphasize minimizing image size, reducing attack surface, and ensuring deterministic builds. Option C fully aligns with these principles, making it the correct answer.
Multi-stage builds allow developers to separate the build environment from the runtime environment. Dependencies such as compilers, build tools, and temporary artifacts are used only in intermediate stages and excluded from the final image. This significantly reduces image size and limits the presence of unnecessary tools that could be exploited at runtime.
Pinning the base image to a specific digest ensures immutability and reproducibility. Tags such as latest can change over time, potentially introducing breaking changes or vulnerabilities without notice. By using a digest, teams guarantee that the same base image is used every time the image is built, which is essential for predictable behavior, security auditing, and reliable rollbacks.
Installing only necessary packages further reduces the attack surface. Every additional package increases the risk of vulnerabilities and expands the maintenance burden. Minimal images are faster to pull, quicker to start, and easier to scan for vulnerabilities. Kubernetes security guidance consistently recommends keeping container images as small and purpose-built as possible.
Option A is incorrect because using the latest tag undermines build determinism and traceability. Option B is incorrect because installing extra packages “just in case” contradicts the principle of minimalism and increases security risk. Option D is incorrect because avoiding multi-stage builds and installing unnecessary packages leads to larger, less secure images and is explicitly discouraged in cloud native best practices.
According to Kubernetes and CNCF security guidance, combining multi-stage builds, immutable image references, and minimal dependencies results in more secure, reliable, and maintainable container images. Therefore, option C represents the best and fully verified approach when building container images.
=========
What does vertical scaling an application deployment describe best?
Options:
Adding/removing applications to meet demand.
Adding/removing node instances to the cluster to meet demand.
Adding/removing resources to applications to meet demand.
Adding/removing application instances of the same application to meet demand.
Answer:
CExplanation:
Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
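As an illustrative fragment only (names and numbers are placeholders), vertical scaling amounts to editing the per-container resources block in the workload's Pod template:

# Pod template fragment; vertical scaling = changing these values
containers:
  - name: api
    image: example/api:1.0         # placeholder image
    resources:
      requests:
        cpu: "250m"
        memory: "512Mi"            # raised from e.g. 256Mi: a vertical scale-up
      limits:
        cpu: "500m"
        memory: "1Gi"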
This differs from horizontal scaling, which changes the number of instances (replicas). Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA). Option B describes scaling the infrastructure layer (nodes) which is cluster/node autoscaling (Cluster Autoscaler in cloud environments). Option A is not a standard scaling definition.
In practice, vertical scaling in Kubernetes can be manual (edit the Deployment resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less “instant” than HPA and can disrupt workloads if not planned. That’s why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory-bound/cpu-bound behavior.
From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.
=========
Kubernetes Secrets are specifically intended to hold confidential data. Which API object should be used to hold non-confidential data?
Options:
CNI
CSI
ConfigMaps
RBAC
Answer:
CExplanation:
In Kubernetes, different API objects are designed for different categories of configuration and operational data. Secrets are used to store sensitive information such as passwords, API tokens, and encryption keys. For data that is not confidential, Kubernetes provides the ConfigMap resource, making option C the correct answer.
ConfigMaps are intended to hold non-sensitive configuration data that applications need at runtime. Examples include application configuration files, feature flags, environment-specific settings, URLs, port numbers, and command-line arguments. ConfigMaps allow developers to decouple configuration from application code, which aligns with cloud-native and twelve-factor app principles. This separation makes applications more portable, easier to manage, and simpler to update without rebuilding container images.
ConfigMaps can be consumed by Pods in several ways: as environment variables, as command-line arguments, or as files mounted into a container’s filesystem. Because they are not designed for confidential data, ConfigMaps store values in plaintext and do not provide encryption by default. This is why sensitive data must always be stored in Secrets instead.
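A short sketch (the ConfigMap name, keys, and image are placeholders) showing both consumption styles:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
data:
  LOG_LEVEL: "info"
  app.properties: |
    feature.flag=true
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0       # placeholder image
      envFrom:
        - configMapRef:
            name: app-config       # keys become environment variables
      volumeMounts:
        - name: config
          mountPath: /etc/app      # keys become files under /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config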
Option A, CNI (Container Network Interface), is a networking specification used to configure Pod networking and is unrelated to data storage. Option B, CSI (Container Storage Interface), is used for integrating external storage systems with Kubernetes and does not store configuration data. Option D, RBAC, defines authorization policies and access controls within the cluster and is not a data storage mechanism.
While both Secrets and ConfigMaps can technically be accessed in similar ways by Pods, Kubernetes clearly distinguishes their intended use cases based on data sensitivity. Using ConfigMaps for non-confidential data improves clarity, security posture, and maintainability of Kubernetes configurations.
Therefore, the correct and verified answer is Option C: ConfigMaps, which are explicitly designed to hold non-confidential configuration data in Kubernetes.
=========
What is an important consideration when choosing a base image for a container in a Kubernetes deployment?
Options:
It should be minimal and purpose-built for the application to reduce attack surface and improve performance.
It should always be the latest version to ensure access to the newest features.
It should be the largest available image to ensure all dependencies are included.
It can be any existing image from the public repository without consideration of its contents.
Answer:
AExplanation:
Choosing an appropriate base image is a critical decision in building containerized applications for Kubernetes, as it directly impacts security, performance, reliability, and operational efficiency. A key best practice is to select a minimal, purpose-built base image, making option A the correct answer.
Minimal base images—such as distroless images or slim variants of common distributions—contain only the essential components required to run the application. By excluding unnecessary packages, shells, and utilities, these images significantly reduce the attack surface. Fewer components mean fewer potential vulnerabilities, which is especially important in Kubernetes environments where containers are often deployed at scale and exposed to dynamic network traffic.
Smaller images also improve performance and efficiency. They reduce image size, leading to faster image pulls, quicker Pod startup times, and lower network and storage overhead. This is particularly beneficial in large clusters or during frequent deployments, scaling events, or rolling updates. Kubernetes’ design emphasizes fast, repeatable deployments, and lightweight images align well with these goals.
Option B is incorrect because always using the latest image version can introduce instability or unexpected breaking changes. Kubernetes best practices recommend using explicitly versioned and tested images to ensure predictable behavior and reproducibility. Option C is incorrect because large images increase the attack surface, slow down deployments, and often include unnecessary dependencies that are never used by the application. Option D is incorrect because blindly using public images without inspecting their contents or provenance introduces serious security and compliance risks.
Kubernetes documentation and cloud-native security guidance consistently emphasize the principle of least privilege and minimalism in container images. A well-chosen base image supports secure defaults, faster operations, and easier maintenance, all of which are essential for running reliable workloads in production Kubernetes environments.
Therefore, the correct and verified answer is Option A.
=========
What is the core functionality of GitOps tools like Argo CD and Flux?
Options:
They track production changes made by a human in a Git repository and generate a human-readable audit trail.
They replace human operations with an agent that tracks Git commands.
They automatically create pull requests when dependencies are outdated.
They continuously compare the desired state in Git with the actual production state and notify or act upon differences.
Answer:
DExplanation:
The defining capability of GitOps controllers such as Argo CD and Flux is continuous reconciliation: they compare the desired state stored in Git to the actual state in the cluster and then alert and/or correct drift, making D correct. In GitOps, Git becomes the single source of truth for declarative configuration (Kubernetes manifests, Helm charts, Kustomize overlays). The controller watches Git for changes and applies them, and it also watches the cluster for divergence.
This is more than “auditing human changes” (option A). GitOps does provide auditability because changes are made via commits and pull requests, but the core functionality is the reconciliation loop that keeps cluster state aligned with Git, including optional automated sync/remediation. Option B is not accurate because GitOps is not about tracking user Git commands; it’s about reconciling desired state definitions. Option C (automatically creating pull requests for outdated dependencies) is a useful feature in some tooling ecosystems, but it is not the central defining behavior of GitOps controllers.
In Kubernetes delivery terms, this approach improves reliability: rollouts become repeatable, configuration drift is detected, and recovery is simpler (reapply known-good state from Git). It also supports separation of duties: platform teams can control policies and base layers, while app teams propose changes via PRs.
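As a hedged sketch of what such a controller consumes (the repository URL, path, and names below are placeholders), an Argo CD Application with automated sync captures both the desired state and the drift-correction behavior:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                     # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git   # placeholder repository
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                  # remove resources that were deleted from Git
      selfHeal: true               # revert manual drift back to the Git-declared state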
So the verified statement is: GitOps tools continuously reconcile Git desired state with cluster actual state—exactly option D.
=========
What does “Continuous Integration” mean?
Options:
The continuous integration and testing of code changes from multiple sources manually.
The continuous integration and testing of code changes from multiple sources via automation.
The continuous integration of changes from one environment to another.
The continuous integration of new tools to support developers in a project.
Answer:
BExplanation:
The correct answer is B: Continuous Integration (CI) is the practice of frequently integrating code changes from multiple contributors and validating them through automated builds and tests. The “continuous” part is about doing this often (ideally many times per day) and consistently, so integration problems are detected early instead of piling up until a painful merge or release window.
Automation is essential. CI typically includes steps like compiling/building artifacts, running unit and integration tests, executing linters, checking formatting, scanning dependencies for vulnerabilities, and producing build reports. This automation creates fast feedback loops that help developers catch regressions quickly and maintain a releasable main branch.
Option A is incorrect because manual integration/testing does not scale and undermines the reliability and speed that CI is meant to provide. Option C confuses CI with deployment promotion across environments (which is more aligned with Continuous Delivery/Deployment). Option D is unrelated: adding tools can support CI, but it isn’t the definition.
In cloud-native application delivery, CI is tightly coupled with containerization and Kubernetes: CI pipelines often build container images from source, run tests, scan images, sign artifacts, and push to registries. Those validated artifacts then flow into CD processes that deploy to Kubernetes using manifests, Helm, or GitOps controllers. Without CI, Kubernetes rollouts become riskier because you lack consistent validation of what you’re deploying.
So, CI is best defined as automated integration and testing of code changes from multiple sources, which matches option B.
=========
What is the default value for authorization-mode in Kubernetes API server?
Options:
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
Answer:
BExplanation:
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
=========
Which of the following is a challenge derived from running cloud native applications?
Options:
The operational costs of maintaining the data center of the company.
Cost optimization is complex to maintain across different public cloud environments.
The lack of different container images available in public image repositories.
The lack of services provided by the most common public clouds.
Answer:
BExplanation:
The correct answer is B. Cloud-native applications often run across multiple environments—different cloud providers, regions, accounts/projects, and sometimes hybrid deployments. This introduces real cost-management complexity: pricing models differ (compute types, storage tiers, network egress), discount mechanisms vary (reserved capacity, savings plans), and telemetry/charge attribution can be inconsistent. When you add Kubernetes, the abstraction layer can further obscure cost drivers because costs are incurred at the infrastructure level (nodes, disks, load balancers) while consumption happens at the workload level (namespaces, Pods, services).
Option A is less relevant because cloud-native adoption often reduces dependence on maintaining a private datacenter; many organizations adopt cloud-native specifically to avoid datacenter CapEx/ops overhead. Option C is generally untrue—public registries and vendor registries contain vast numbers of images; the challenge is more about provenance, security, and supply chain than “lack of images.” Option D is incorrect because major clouds offer abundant services; the difficulty is choosing among them and controlling cost/complexity, not a lack of services.
Cost optimization being complex is a recognized challenge because cloud-native architectures include microservices sprawl, autoscaling, ephemeral environments, and pay-per-use dependencies (managed databases, message queues, observability). Small misconfigurations can cause big bills: noisy logs, over-requested resources, unbounded HPA scaling, and egress-heavy architectures. That’s why practices like FinOps, tagging/labeling for allocation, and automated guardrails are emphasized.
So the best answer describing a real, common cloud-native challenge is B.
=========
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
Options:
The kube-proxy
The node controller
The kubectl
The kube-apiserver
Answer:
BExplanation:
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The Node Controller (a controller that runs as part of the control plane, within the controller-manager) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The Node Controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it marks the node condition Ready as Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions. Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring. Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the Node Controller’s job.
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
=========
Which of the following characteristics is associated with container orchestration?
Options:
Application message distribution
Dynamic scheduling
Deploying application JAR files
Virtual machine distribution
Answer:
BExplanation:
A core capability of container orchestration is dynamic scheduling, so B is correct. Orchestration platforms (like Kubernetes) are responsible for deciding where containers (packaged as Pods in Kubernetes) should run, based on real-time cluster conditions and declared requirements. “Dynamic” means the system makes placement decisions continuously as workloads are created, updated, or fail, and as cluster capacity changes.
In Kubernetes, the scheduler evaluates Pods that have no assigned node, filters nodes that don’t meet requirements (resources, taints/tolerations, affinity/anti-affinity, topology constraints), and then scores remaining nodes to pick the best target. This scheduling happens at runtime and adapts to the current state of the cluster. If nodes go down or Pods crash, controllers create replacements and the scheduler places them again—another aspect of dynamic orchestration.
The other options don’t define container orchestration: “application message distribution” is more about messaging systems or service communication patterns, not orchestration. “Deploying application JAR files” is a packaging/deployment detail relevant to Java apps but not a defining orchestration capability. “Virtual machine distribution” refers to VM management rather than container orchestration; Kubernetes focuses on containers and Pods (even if those containers sometimes run in lightweight VMs via sandbox runtimes).
So, the defining trait here is that an orchestrator automatically and continuously schedules and reschedules workloads, rather than relying on static placement decisions.
=========
What is the purpose of the CRI?
Options:
To provide runtime integration control when multiple runtimes are used.
Support container replication and scaling on nodes.
Provide an interface allowing Kubernetes to support pluggable container runtimes.
Allow the definition of dynamic resource criteria across containers.
Answer:
CExplanation:
The Container Runtime Interface (CRI) exists so Kubernetes can support pluggable container runtimes behind a stable interface, which makes C correct. In Kubernetes, the kubelet is responsible for managing Pods on a node, but it does not implement container execution itself. Instead, it delegates container lifecycle operations (pull images, create pod sandbox, start/stop containers, fetch logs, exec/attach streaming) to a container runtime through a well-defined API. CRI is that API contract.
Because of CRI, Kubernetes can run with different container runtimes—commonly containerd or CRI-O—without changing kubelet core logic. This improves portability and keeps Kubernetes modular: runtime innovation can happen independently while Kubernetes retains a consistent operational model. CRI is accessed via gRPC and defines the services and message formats kubelet uses to communicate with runtimes.
Option B is incorrect because replication and scaling are handled by controllers (Deployments/ReplicaSets) and schedulers, not by CRI. Option D is incorrect because resource criteria (requests/limits) are expressed in Pod specs and enforced via OS mechanisms (cgroups) and kubelet/runtime behavior, but CRI is not “for defining dynamic resource criteria.” Option A is vague and not the primary statement; while CRI enables runtime integration, its key purpose is explicitly to make runtimes pluggable and interoperable.
This design became even more important as Kubernetes moved away from Docker Engine integration (dockershim removal from kubelet). With CRI, Kubernetes focuses on orchestrating Pods, while runtimes focus on executing containers. That separation of responsibilities is a core container orchestration principle and is exactly what the question is testing.
So the verified answer is C.
=========
What factors influence the Kubernetes scheduler when it places Pods on nodes?
Options:
Pod memory requests, node taints, and Pod affinity.
Pod labels, node labels, and request labels.
Node taints, node level, and Pod priority.
Pod priority, container command, and node labels.
Answer:
AExplanation:
The Kubernetes scheduler chooses a node for a Pod by evaluating scheduling constraints and cluster state. Key inputs include resource requests (CPU/memory), taints/tolerations, and affinity/anti-affinity rules. Option A directly names three real, high-impact scheduling factors—Pod memory requests, node taints, and Pod affinity—so A is correct.
Resource requests are fundamental: the scheduler must ensure the target node has enough allocatable CPU/memory to satisfy the Pod’s requests. Requests (not limits) drive placement decisions. Taints on nodes repel Pods unless the Pod has a matching toleration, which is commonly used to reserve nodes for special workloads (GPU nodes, system nodes, restricted nodes) or to protect nodes under certain conditions. Affinity and anti-affinity allow expressing “place me near” or “place me away” rules—e.g., keep replicas spread across failure domains or co-locate components for latency.
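A hedged Pod sketch (the name, image, taint key, and labels are all illustrative) combining the three factors named in option A:

apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0       # placeholder image
      resources:
        requests:
          memory: "512Mi"          # node must have this much allocatable memory
          cpu: "250m"
  tolerations:
    - key: "dedicated"             # tolerates a hypothetical taint dedicated=gpu:NoSchedule
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: cache           # co-locate with Pods labeled app=cache
          topologyKey: kubernetes.io/hostname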
Option B includes labels, which do matter, but “request labels” is not a standard scheduler concept; labels influence scheduling mainly through selectors and affinity, not as a direct category called “request labels.” Option C mixes a real concept (taints, priority) with “node level,” which isn’t a standard scheduling factor term. Option D includes “container command,” which does not influence scheduling; the scheduler does not care what command the container runs, only placement constraints and resources.
Under the hood, kube-scheduler uses a two-phase process (filtering then scoring) to select a node, but the inputs it filters/scores include exactly the kinds of constraints in A. Therefore, the verified best answer is A.
=========
What are the advantages of adopting a GitOps approach for your deployments?
Options:
Reduce failed deployments, operational costs, and fragile release processes.
Reduce failed deployments, configuration drift, and fragile release processes.
Reduce failed deployments, operational costs, and learn git.
Reduce failed deployments, configuration drift and improve your reputation.
Answer:
BExplanation:
The correct answer is B: GitOps helps reduce failed deployments, reduce configuration drift, and reduce fragile release processes. GitOps is an operating model where Git is the source of truth for declarative configuration (Kubernetes manifests, Helm releases, Kustomize overlays). A GitOps controller (like Flux or Argo CD) continuously reconciles the cluster’s actual state to match what’s declared in Git. This creates a stable, repeatable deployment pipeline and minimizes “snowflake” environments.
Reducing failed deployments: changes go through pull requests, code review, automated checks, and controlled merges. Deployments become predictable because the controller applies known-good, versioned configuration rather than ad-hoc manual commands. Rollbacks are also simpler—reverting a Git commit returns the cluster to the prior desired state.
Reducing configuration drift: without GitOps, clusters often drift because humans apply hotfixes directly in production or because different environments diverge over time. With GitOps, the controller detects drift and either alerts or automatically corrects it, restoring alignment with Git.
Reducing fragile release processes: releases become standardized and auditable. Git history provides an immutable record of who changed what and when. Promotion between environments becomes systematic (merge/branch/tag), and the same declarative artifacts are used consistently.
The other options include items that are either not the primary GitOps promise (like “learn git”) or subjective (“improve your reputation”). Operational cost reduction can happen indirectly through fewer incidents and more automation, but the most canonical and direct GitOps advantages in Kubernetes delivery are reliability and drift control—captured precisely in B.
=========
How many different Kubernetes service types can you define?
Options:
2
3
4
5
Answer:
CExplanation:
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node's IP. Traffic sent to that port on any node is forwarded to the Service's backend Pods, which makes it useful for simple external access or for fronting with an external load balancer.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
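For comparison, a headless Service is an ordinary Service with clusterIP set to None (names and port are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: db-headless                # hypothetical name
spec:
  clusterIP: None                  # headless: DNS returns the backing Pod IPs directly
  selector:
    app: db
  ports:
    - port: 5432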
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
Therefore, the verified answer is C (4).
=========
Which of the following is a valid PromQL query?
Options:
SELECT * from http_requests_total WHERE job=apiserver
http_requests_total WHERE (job="apiserver")
SELECT * from http_requests_total
http_requests_total(job="apiserver")
Answer:
DExplanation:
Prometheus Query Language (PromQL) uses a metric-and-selector syntax, not SQL. A query typically starts with a metric name, optionally followed by label matchers that filter the time series. Among the choices, option D (http_requests_total(job="apiserver")) is the only one written in this metric-plus-label-filter style, so D is correct.
Conceptually, the query means “select the time series of the metric http_requests_total where the job label equals apiserver.” Note that in real PromQL the label matchers go in curly braces, as in http_requests_total{job="apiserver"}; the parentheses in option D are the quiz's simplification of that syntax. The key point is that PromQL filters by applying label matchers directly to the metric selector rather than using SQL WHERE clauses.
Options A and C are invalid because they use SQL (SELECT * FROM ...) which is not PromQL. Option B is also invalid because PromQL does not use the keyword WHERE. PromQL filtering is done by applying label matchers directly to the metric selector.
In Kubernetes observability, PromQL is central to building dashboards and alerts from cluster metrics. For example, you might compute rates from counters: rate(http_requests_total{job="apiserver"}[5m]), aggregate by labels: sum by (code) (...), or alert on error ratios. Understanding the selector and label-matcher model is foundational because Prometheus metrics are multi-dimensional—labels define the slices you can filter and aggregate on.
So, within the provided options, D is the only one that follows PromQL’s metric+label-filter style and therefore is the verified correct answer.
=========
Imagine you're releasing open-source software for the first time. Which of the following is a valid semantic version?
Options:
1.0
2021-10-11
0.1.0-rc
v1beta1
Answer:
CExplanation:
Semantic Versioning (SemVer) follows the pattern MAJOR.MINOR.PATCH with optional pre-release identifiers (e.g., -rc, -alpha.1) and build metadata. Among the options, 0.1.0-rc matches SemVer rules, so C is correct.
0.1.0-rc breaks down as: MAJOR=0, MINOR=1, PATCH=0, and -rc indicates a pre-release (“release candidate”). Pre-release versions are valid SemVer and are explicitly allowed to denote versions that are not yet considered stable. For a first-time open-source release, 0.x.y is common because it signals the API may still change in backward-incompatible ways before reaching 1.0.0.
Why the other options are not correct SemVer as written:
1.0 is missing the PATCH segment; SemVer requires three numeric components (e.g., 1.0.0).
2021-10-11 is a date string, not MAJOR.MINOR.PATCH.
v1beta1 resembles Kubernetes API versioning conventions, not SemVer.
In cloud-native delivery and Kubernetes ecosystems, SemVer matters because it communicates compatibility. Incrementing MAJOR indicates breaking changes, MINOR indicates backward-compatible feature additions, and PATCH indicates backward-compatible bug fixes. Pre-release tags allow releasing candidates for testing without claiming full stability. This is especially useful for open-source consumers and automation systems that need consistent version comparison and upgrade planning.
So, the only valid semantic version in the choices is 0.1.0-rc, option C.
=========
How do you load and generate data required before Pod startup?
Options:
Use an init container with shared file storage.
Use a PVC volume.
Use a sidecar container with shared volume.
Use another Pod with a PVC.
Answer:
AExplanation:
The Kubernetes-native mechanism to run setup steps before the main application containers start is an init container, so A is correct. Init containers run sequentially and must complete successfully before the regular containers in the Pod are started. This makes them ideal for preparing configuration, downloading artifacts, performing migrations, generating files, or waiting for dependencies.
The question specifically asks how to “load and generate data required before Pod startup.” The most common pattern is: an init container writes files into a shared volume (like an emptyDir volume) mounted by both the init container and the app container. When the init container finishes, the app container starts and reads the generated files. This is deterministic and aligns with Kubernetes Pod lifecycle semantics.
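A minimal sketch of that pattern (Pod name, image, and file path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # hypothetical name
spec:
  volumes:
    - name: workdir
      emptyDir: {}                 # scratch space shared by init and app containers
  initContainers:
    - name: prepare-data
      image: busybox:1.36
      command: ["sh", "-c", "echo 'generated config' > /work/config.txt"]
      volumeMounts:
        - name: workdir
          mountPath: /work
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "cat /work/config.txt && sleep 3600"]
      volumeMounts:
        - name: workdir
          mountPath: /work

The app container only starts after prepare-data has exited successfully.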
A sidecar container (option C) runs concurrently with the main container, so it is not guaranteed to complete work before startup. Sidecars are great for ongoing concerns (log shipping, proxies, config reloaders), but they are not the primary “before startup” mechanism. A PVC volume (option B) is just storage; it doesn’t itself perform generation or ensure ordering. “Another Pod with a PVC” (option D) introduces coordination complexity and still does not guarantee the data is prepared before this Pod starts unless you build additional synchronization.
Init containers are explicitly designed for this kind of pre-flight work, and Kubernetes guarantees ordering: all init containers complete in order, then the app containers begin. That guarantee is why A is the best and verified answer.
=========
Which of these components is part of the Kubernetes Control Plane?
Options:
CoreDNS
cloud-controller-manager
kube-proxy
kubelet
Answer:
BExplanation:
The Kubernetes control plane is the set of components responsible for making cluster-wide decisions (like scheduling) and detecting and responding to cluster events (like starting new Pods when they fail). In upstream Kubernetes architecture, the canonical control plane components include kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, and—when running on a cloud provider—the cloud-controller-manager. That makes option B the correct answer: cloud-controller-manager is explicitly a control plane component that integrates Kubernetes with the underlying cloud.
The cloud-controller-manager runs controllers that talk to cloud APIs for infrastructure concerns such as node lifecycle, routes, and load balancers. For example, when you create a Service of type LoadBalancer, a controller in this component is responsible for provisioning a cloud load balancer and updating the Service status. This is clearly control-plane behavior: reconciling desired state into real infrastructure state.
Why the others are not control plane components (in the classic classification): kubelet is a node component (agent) responsible for running and managing Pods on a specific node. kube-proxy is also a node component that implements Service networking rules on nodes. CoreDNS is usually deployed as a cluster add-on for DNS-based service discovery; it’s critical, but it’s not a control plane component in the strict architectural list.
So, while many clusters run CoreDNS in kube-system, the Kubernetes component that is definitively “part of the control plane” among these choices is cloud-controller-manager (B).
=========
What’s the difference between a security profile and a security context?
Options:
Security Contexts configure Clusters and Namespaces at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Contexts configure Pods and Containers at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Profiles configure Pods and Containers at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
Security Profiles configure Clusters and Namespaces at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
Answer:
BExplanation:
The correct answer is B. In Kubernetes, a securityContext is part of the Pod and container specification that configures runtime security settings for that workload—things like runAsUser, runAsNonRoot, Linux capabilities, readOnlyRootFilesystem, allowPrivilegeEscalation, SELinux options, seccomp profile selection, and filesystem group (fsGroup). These settings directly affect how the Pod’s containers run on the node.
A security profile, in contrast, is a higher-level policy/standard enforced by the cluster control plane (typically via admission control) to ensure workloads meet required security constraints. In modern Kubernetes, this concept aligns with mechanisms like Pod Security Standards (Privileged, Baseline, Restricted) enforced through Pod Security Admission. The “profile” defines what is allowed or forbidden (for example, disallow privileged containers, disallow hostPath mounts, require non-root, restrict capabilities). The control plane enforces these constraints by validating or rejecting Pod specs that do not comply—ensuring consistent security posture across namespaces and teams.
Option A and D are incorrect because security contexts do not “configure clusters and namespaces at runtime”; security contexts apply to Pods/containers. Option C reverses the relationship: security profiles don’t configure Pods at runtime; they constrain what security context settings (and other fields) are acceptable.
Practically, you can think of it as:
SecurityContext = workload-level configuration knobs (declared in manifests, applied at runtime).
SecurityProfile/Standards = cluster-level guardrails that determine which knobs/settings are permitted.
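As a hedged illustration of both layers (names and image are placeholders; the namespace labels follow the standard pod-security.kubernetes.io convention used by Pod Security Admission):

apiVersion: v1
kind: Pod
metadata:
  name: secure-app                 # hypothetical name
spec:
  securityContext:                 # workload-level knobs
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: example/app:1.0       # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                     # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # cluster-level guardrail (profile)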
This separation supports least privilege: developers declare needed runtime settings, and cluster governance ensures those settings stay within approved boundaries. Therefore, B is the verified answer.
=========
What are the most important resources to guarantee the performance of an etcd cluster?
Options:
CPU and disk capacity.
Network throughput and disk I/O.
CPU and RAM memory.
Network throughput and CPU.
Answer:
BExplanation:
etcd is the strongly consistent key-value store backing Kubernetes cluster state. Its performance directly affects the entire control plane because most API operations require reads/writes to etcd. The most critical resources for etcd performance are disk I/O (especially latency) and network throughput/latency between etcd members and API servers—so B is correct.
etcd is write-ahead-log (WAL) based and relies heavily on stable, low-latency storage. Slow disks increase commit latency, which slows down object updates, watches, and controller loops. In busy clusters, poor disk performance can cause request backlogs and timeouts, showing up as slow kubectl operations and delayed controller reconciliation. That’s why production guidance commonly emphasizes fast SSD-backed storage and careful monitoring of fsync latency.
Network performance matters because etcd uses the Raft consensus protocol. Writes must be replicated to a quorum of members, and leader-follower communication is continuous. High network latency or low throughput can slow replication and increase the time to commit writes. Unreliable networking can also cause leader elections or cluster instability, further degrading performance and availability.
CPU and memory are still relevant, but they are usually not the first bottleneck compared to disk and network. CPU affects request processing and encryption overhead if enabled, while memory affects caching and compaction behavior. Disk “capacity” alone (size) is less relevant than disk I/O characteristics (latency, IOPS), because etcd performance is sensitive to fsync and write latency.
In Kubernetes operations, ensuring etcd health includes: using dedicated fast disks, keeping network stable, enabling regular compaction/defragmentation strategies where appropriate, sizing correctly (typically odd-numbered members for quorum), and monitoring key metrics (commit latency, fsync duration, leader changes). Because etcd is the persistence layer of the API, disk I/O and network quality are the primary determinants of control-plane responsiveness—hence B.
=========
In a cloud native environment, how do containerization and virtualization differ in terms of resource management?
Options:
Containerization uses hypervisors to manage resources, while virtualization does not.
Containerization shares the host OS, while virtualization runs a full OS for each instance.
Containerization consumes more memory than virtualization by default.
Containerization allocates resources per container, virtualization does not isolate them.
Answer:
BExplanation:
The fundamental difference between containerization and virtualization in a cloud native environment lies in how they manage and isolate resources, particularly with respect to the operating system. The correct description is that containerization shares the host operating system, while virtualization runs a full operating system for each instance, making option B the correct answer.
In virtualization, each virtual machine (VM) includes its own complete guest operating system running on top of a hypervisor. The hypervisor virtualizes hardware resources—CPU, memory, storage, and networking—and allocates them to each VM. Because every VM runs a full OS, virtualization introduces significant overhead in terms of memory usage, disk space, and startup time. However, it provides strong isolation between workloads, which is useful for running different operating systems or untrusted workloads on the same physical hardware.
In contrast, containerization operates at the operating system level rather than the hardware level. Containers share the host OS kernel and isolate applications using kernel features such as namespaces and control groups (cgroups). This design makes containers much lighter weight than virtual machines. Containers start faster, consume fewer resources, and allow higher workload density on the same infrastructure. Resource limits and isolation are still enforced, but without duplicating the entire operating system for each application instance.
Option A is incorrect because hypervisors are a core component of virtualization, not containerization. Option C is incorrect because containers generally consume less memory than virtual machines due to the absence of a full guest OS. Option D is incorrect because virtualization does isolate resources very strongly, while containers rely on OS-level isolation rather than hardware-level isolation.
In cloud native architectures, containerization is preferred for microservices and scalable workloads because of its efficiency and portability. Virtualization is still valuable for stronger isolation and heterogeneous operating systems. Therefore, Option B accurately captures the key resource management distinction between the two models.
=========
Which are the two primary modes for Service discovery within a Kubernetes cluster?
Options:
Environment variables and DNS
API calls and LDAP
Labels and RADIUS
Selectors and DHCP
Answer:
AExplanation:
Kubernetes supports two primary built-in modes of Service discovery for workloads: environment variables and DNS, making A correct.
Environment variables: When a Pod is created, kubelet can inject environment variables for Services that exist in the same namespace at the time the Pod starts. These variables include the Service host and port, following the pattern <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT (for example, MY_SERVICE_SERVICE_HOST and MY_SERVICE_SERVICE_PORT for a Service named my-service). This approach is simple but has limitations: values are captured at Pod creation time and don’t automatically update if Services change, and it can become cluttered in namespaces with many Services.
DNS-based discovery: This is the most common and flexible method. Kubernetes cluster DNS (usually CoreDNS) provides names like service-name.namespace.svc.cluster.local. Clients resolve the name and connect to the Service, which then routes to backend Pods. DNS scales better, is dynamic with endpoint updates, supports headless Services for per-Pod discovery, and is the default pattern for microservice communication.
The other options are not Kubernetes service discovery modes. Labels and selectors are used internally to relate Services to Pods, but they are not what application code uses for discovery (apps typically don’t query selectors; they call DNS names). LDAP and RADIUS are identity/authentication protocols, not service discovery. DHCP is for IP assignment on networks, not for Kubernetes Service discovery.
Operationally, DNS is central: many applications assume name-based connectivity. If CoreDNS is misconfigured or overloaded, service-to-service calls may fail even if Pods and Services are otherwise healthy. Environment-variable discovery can still work for some legacy apps, but modern cloud-native practice strongly prefers DNS (and sometimes service meshes on top of it). The key exam concept is: Kubernetes provides service discovery via env vars and DNS.
=========
Which of the following is a responsibility of the governance board of an open source project?
Options:
Decide about the marketing strategy of the project.
Review the pull requests in the main branch.
Outline the project's “terms of engagement”.
Define the license to be used in the project.
Answer:
CExplanation:
A governance board in an open source project typically defines how the community operates—its decision-making rules, roles, conflict resolution, and contribution expectations—so C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review. Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility. Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C.
=========
A platform engineer wants to ensure that a new microservice is automatically deployed to every cluster registered in Argo CD. Which configuration best achieves this goal?
Options:
Set up a Kubernetes CronJob that redeploys the microservice to all registered clusters on a schedule.
Manually configure every registered cluster with the deployment YAML for installing the microservice.
Create an Argo CD ApplicationSet that uses a Git repository containing the microservice manifests.
Use a Helm chart to package the microservice and manage it with a single Application defined in Argo CD.
Answer:
CExplanation:
Argo CD is a declarative GitOps continuous delivery tool designed to manage Kubernetes applications across one or many clusters. When the requirement is to automatically deploy a microservice to every cluster registered in Argo CD, the most appropriate and scalable solution is to use an ApplicationSet.
The ApplicationSet controller extends Argo CD by enabling the dynamic generation of multiple Argo CD Applications from a single template. One of its most powerful features is the cluster generator, which automatically discovers all clusters registered with Argo CD and creates an Application for each of them. By combining this generator with a Git repository containing the microservice manifests, the platform engineer ensures that the microservice is consistently deployed to all existing clusters—and any new clusters added in the future—without manual intervention.
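A hedged sketch of such an ApplicationSet (the repository URL, path, and names are placeholders), using the cluster generator to fan out to every registered cluster:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-microservice            # hypothetical name
  namespace: argocd
spec:
  generators:
    - clusters: {}                 # one set of parameters per cluster registered in Argo CD
  template:
    metadata:
      name: 'my-microservice-{{name}}'   # {{name}} comes from the cluster generator
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-repo.git   # placeholder repository
        targetRevision: main
        path: services/my-microservice
      destination:
        server: '{{server}}'             # API server URL supplied by the generator
        namespace: my-microservice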
This approach aligns perfectly with GitOps principles. The desired state of the microservice is defined once in Git, and Argo CD continuously reconciles that state across all target clusters. Any updates to the microservice manifests are automatically rolled out everywhere in a controlled and auditable manner. This provides strong guarantees around consistency, scalability, and operational simplicity.
Option A is incorrect because a CronJob introduces imperative redeployment logic and does not integrate with Argo CD’s reconciliation model. Option B is not scalable or maintainable, as it requires manual configuration for each cluster and increases the risk of configuration drift. Option D, while useful for packaging applications, still results in a single Application object and does not natively handle multi-cluster fan-out by itself.
Therefore, the correct and verified answer is Option C: creating an Argo CD ApplicationSet backed by a Git repository, which is the recommended and documented solution for multi-cluster application delivery in Argo CD.
=========
What is a Dockerfile?
Options:
A bash script that is used to automatically build a docker image.
A config file that defines which image registry a container should be pushed to.
A text file that contains all the commands a user could call on the command line to assemble an image.
An image layer created by a running container stored on the host.
Answer:
CExplanation:
A Dockerfile is a text file that contains a sequence of instructions used to build a container image, so C is correct. These instructions include choosing a base image (FROM), copying files (COPY/ADD), installing dependencies (RUN), setting environment variables (ENV), defining working directories (WORKDIR), exposing ports (EXPOSE), and specifying the default startup command (CMD/ENTRYPOINT). When you run docker build (or compatible tools like BuildKit), the builder executes these instructions to produce an image composed of immutable layers.
In cloud-native application delivery, Dockerfiles (more generally, OCI image build definitions) are a key step in the supply chain. The resulting image artifact is what Kubernetes runs in Pods. Best practices include using minimal base images, pinning versions, avoiding embedding secrets, and using multi-stage builds to keep runtime images small. These practices improve security and performance, and make delivery pipelines more reliable.
Option A is incorrect because a Dockerfile is not a bash script, even though it can run shell commands through RUN instructions. Option B is incorrect because registry destinations are handled by tagging and push commands (or CI pipeline configuration), not by the Dockerfile itself. Option D is incorrect because it describes the writable layer a running container creates on the host (which can be captured with docker commit); a Dockerfile is the build recipe, not a runtime-generated layer.
Although the question uses “Dockerfile,” the concept maps well to OCI-based container image creation generally: you define a reproducible build recipe that produces an immutable image artifact. That artifact is then versioned, scanned, signed, stored in a registry, and deployed to Kubernetes through manifests/Helm/GitOps. Therefore, C is the correct and verified definition.
=========
In a serverless computing architecture:
Options:
Users of the cloud provider are charged based on the number of requests to a function.
Serverless functions are incompatible with containerized functions.
Users should make a reservation to the cloud provider based on an estimation of usage.
Containers serving requests are running in the background in idle status.
Answer:
AExplanation:
Serverless architectures typically bill based on actual consumption, often measured as number of requests and execution duration (and sometimes memory/CPU allocated), so A is correct. The defining trait is that you don’t provision or manage servers directly; the platform scales execution up and down automatically, including down to zero for many models, and charges you for what you use.
Option B is incorrect: many serverless platforms can run container-based workloads (and some are explicitly “serverless containers”). The idea is the operational abstraction and billing model, not incompatibility with containers. Option C is incorrect because “making a reservation based on estimation” describes reserved capacity purchasing, which is the opposite of the typical serverless pay-per-use model. Option D is misleading: serverless systems aim to avoid charging for idle compute; while platforms may keep some warm capacity for latency reasons, the customer-facing model is not “containers running idle in the background.”
In cloud-native architecture, serverless is often chosen for spiky, event-driven workloads where you want minimal ops overhead and cost efficiency at low utilization. It pairs naturally with eventing systems (queues, pub/sub) and can be integrated with Kubernetes ecosystems via event-driven autoscaling frameworks or managed serverless offerings.
So the correct statement is A: charging is commonly based on requests (and usage), which captures the cost and operational model that differentiates serverless from always-on infrastructure.
=========
Which is the correct kubectl command to display logs in real time?
Options:
kubectl logs -p test-container-1
kubectl logs -c test-container-1
kubectl logs -l test-container-1
kubectl logs -f test-container-1
Answer:
D
Explanation:
To stream logs in real time with kubectl, you use the follow option -f, so D is correct. In Kubernetes, kubectl logs retrieves logs from containers in a Pod. By default, it returns the current log output and exits. When you add -f, kubectl keeps the connection open and continuously prints new log lines as they are produced, similar to tail -f on Linux. This is especially useful for debugging live behavior, watching startup sequences, or monitoring an application during a rollout.
The other flags serve different purposes. -p (as seen in option A) requests logs from the previous instance of a container (useful after a restart/crash), not real-time streaming. -c (option B) selects a specific container within a multi-container Pod; it doesn’t stream by itself (though it can be combined with -f). -l (option C) is used with kubectl logs to select Pods by label, but again it is not the streaming flag; streaming requires -f.
In real troubleshooting, you commonly combine flags, e.g. kubectl logs -f pod-name -c container-name for streaming logs from a specific container, or kubectl logs -f -l app=myapp to stream from Pods matching a label selector (depending on kubectl behavior/version). But the key answer to “display logs in real time” is the follow flag: -f.
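For instance (Pod, container, and label names below are placeholders; --tail limits how much history is printed before streaming begins):

  # Stream logs from a single Pod
  kubectl logs -f test-pod

  # Stream logs from one container in a multi-container Pod
  kubectl logs -f test-pod -c test-container-1

  # Stream logs from Pods matching a label selector, starting from the last 20 lines
  kubectl logs -f -l app=myapp --tail=20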
Therefore, the correct selection is D.
=========
What does “continuous” mean in the context of CI/CD?
Options:
Frequent releases, manual processes, repeatable, fast processing
Periodic releases, manual processes, repeatable, automated processing
Frequent releases, automated processes, repeatable, fast processing
Periodic releases, automated processes, repeatable, automated processing
Answer:
C
Explanation:
The correct answer is C: in CI/CD, “continuous” implies frequent releases, automation, repeatability, and fast feedback/processing. The intent is to reduce batch size and latency between code change and validation/deployment. Instead of integrating or releasing in large, risky chunks, teams integrate changes continually and rely on automation to validate and deliver them safely.
“Continuous” does not mean “periodic” (which eliminates B and D). It also does not mean “manual processes” (which eliminates A and B). Automation is core: build, test, security checks, and deployment steps are consistently executed by pipeline systems, producing reliable outcomes and auditability.
In practice, CI means every merge triggers automated builds and tests so the main branch stays in a healthy state. CD means those validated artifacts are promoted through environments with minimal manual steps, often including progressive delivery controls (canary, blue/green), automated rollbacks on health signal failures, and policy checks. Kubernetes works well with CI/CD because it is declarative and supports rollout primitives: Deployments, readiness probes, and rollback revision history enable safer continuous delivery when paired with pipeline automation.
Repeatability is a major part of “continuous.” The same pipeline should run the same way every time, producing consistent artifacts and deployments. This reduces “works on my machine” issues and shortens incident resolution because changes are traceable and reproducible. Fast processing and frequent releases also mean smaller diffs, easier debugging, and quicker customer value delivery.
So, the combination that accurately reflects “continuous” in CI/CD is frequent + automated + repeatable + fast, which is option C.
=========
Which of the following is the name of a container orchestration software?
Options:
OpenStack
Docker
Apache Mesos
CRI-O
Answer:
C
Explanation:
C (Apache Mesos) is correct because Mesos is a cluster manager/orchestrator that can schedule and manage workloads (including containerized workloads) across a pool of machines. Historically, Mesos (often paired with frameworks like Marathon) was used to orchestrate services and batch jobs at scale, similar in spirit to Kubernetes’ scheduling and cluster management role.
Why the other answers are not correct as “container orchestration software” in this context:
OpenStack (A) is primarily an IaaS cloud platform for provisioning compute, networking, and storage (VM-focused). It’s not a container orchestrator, though it can host Kubernetes or containers.
Docker (B) is a container platform and tooling ecosystem (image builds, a runtime, Compose for single-host composition, and Swarm mode for its own clustering), but “Docker” itself is not the best match for “container orchestration software” in the multi-node cluster orchestration sense that the question implies.
CRI-O (D) is a container runtime implementing Kubernetes’ CRI; it runs containers on a node but does not orchestrate placement, scaling, or service lifecycle across a cluster.
Container orchestration typically means capabilities like scheduling, scaling, service discovery integration, health management, and rolling updates across multiple hosts. Mesos fits that definition: it provides resource management and scheduling over a cluster and can run container workloads via supported containerizers. Kubernetes ultimately became the dominant orchestrator for many use cases, but Mesos is clearly recognized as orchestration software in this category.
So, among these choices, the verified orchestration platform is Apache Mesos (C).
=========
Which of the following options include resources cleaned by the Kubernetes garbage collection mechanism?
Options:
Stale or expired CertificateSigningRequests (CSRs) and old deployments.
Nodes deleted by a cloud controller manager and obsolete logs from the kubelet.
Unused container and container images, and obsolete logs from the kubelet.
Terminated pods, completed jobs, and objects without owner references.
Answer:
D
Explanation:
Kubernetes garbage collection (GC) is about cleaning up API objects and related resources that are no longer needed, so the correct answer is D. Two big categories it targets are (1) objects that have finished their lifecycle (like terminated Pods and completed Jobs, depending on controllers and TTL policies), and (2) “dangling” objects that are no longer referenced properly—often described as objects without owner references (or where owners are gone), which can happen when a higher-level controller is deleted or when dependent resources are left behind.
A key Kubernetes concept here is OwnerReferences: many resources are created “owned” by a controller (e.g., a ReplicaSet owned by a Deployment, Pods owned by a ReplicaSet). When an owning object is deleted, Kubernetes’ garbage collector can remove dependent objects based on deletion propagation policies (foreground/background/orphan). This prevents resource leaks and keeps the cluster tidy and performant.
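As a sketch, a ReplicaSet created by a Deployment carries an ownerReference like the following (names and UID are placeholders), and cascading deletion can be requested explicitly with kubectl:

  metadata:
    name: web-5d9c7f6b8d
    ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: web
      uid: 2f5c1a7e-0000-0000-0000-000000000000   # placeholder UID of the owning Deployment
      controller: true
      blockOwnerDeletion: true

  # Foreground cascading deletion: dependents are garbage collected before the owner is removed
  kubectl delete deployment web --cascade=foreground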
The other options are incorrect because they refer to cleanup tasks outside Kubernetes GC’s scope. Kubelet logs (B/C) are node-level files and log rotation is handled by node/runtime configuration, not the Kubernetes garbage collector. Unused container images (C) are managed by the container runtime’s image GC and kubelet disk pressure management, not the Kubernetes API GC. Nodes deleted by a cloud controller (B) aren’t “garbage collected” in the same sense; node lifecycle is handled by controllers and cloud integrations, but not as a generic GC cleanup category like ownerRef-based object deletion.
So, when the question asks specifically about “resources cleaned by Kubernetes garbage collection,” it’s pointing to Kubernetes object lifecycle cleanup: terminated Pods, completed Jobs, and orphaned objects—exactly what option D states.
=========
What are the two steps performed by the kube-scheduler to select a node to schedule a pod?
Options:
Grouping and placing
Filtering and selecting
Filtering and scoring
Scoring and creating
Answer:
C
Explanation:
The kube-scheduler selects a node in two main phases: filtering and scoring, so C is correct. First, filtering identifies which nodes are feasible for the Pod by applying hard constraints. These include resource availability (CPU/memory requests), node taints/tolerations, node selectors and required affinities, topology constraints, and other scheduling requirements. Nodes that cannot satisfy the Pod’s requirements are removed from consideration.
Second, scoring ranks the remaining feasible nodes using priority functions to choose the “best” placement. Scoring can consider factors like spreading Pods across nodes/zones, packing efficiency, affinity preferences, and other policies configured in the scheduler. The node with the highest score is selected (with tie-breaking), and the scheduler binds the Pod by setting spec.nodeName.
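As a sketch of how a single Pod spec feeds both phases (names, image, and zone value are placeholders): the resource requests below are hard requirements evaluated during filtering, while the preferred node affinity is a weighted hint evaluated during scoring.

  apiVersion: v1
  kind: Pod
  metadata:
    name: web
  spec:
    containers:
    - name: app
      image: nginx:1.27                  # placeholder image
      resources:
        requests:                        # filtering: nodes without this capacity are excluded
          cpu: "500m"
          memory: 256Mi
    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 50                     # scoring: nodes in this zone rank higher but are not required
          preference:
            matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["zone-a"]         # placeholder zone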
Option B (“filtering and selecting”) is close but misses the explicit scoring step that is central to scheduler design. The scheduler does “select” a node, but the canonical two-step wording in Kubernetes scheduling is filtering then scoring. Options A and D are not how scheduler internals are described.
Operationally, understanding filtering vs scoring helps troubleshoot scheduling failures. If a Pod can’t be scheduled, it failed in filtering—kubectl describe pod often shows “0/… nodes are available” reasons (insufficient CPU, taints, affinity mismatch). If it schedules but lands in unexpected places, it’s often about scoring preferences (affinity weights, topology spread preferences, default scheduler profiles).
So the verified correct answer is C: kube-scheduler uses Filtering and Scoring.
=========
Which statement about the Kubernetes network model is correct?
Options:
Pods can only communicate with Pods exposed via a Service.
Pods can communicate with all Pods without NAT.
The Pod IP is only visible inside a Pod.
The Service IP is used for the communication between Services.
Answer:
B
Explanation:
Kubernetes’ networking model assumes that every Pod has its own IP address and that Pods can communicate with other Pods across nodes without requiring network address translation (NAT). That makes B correct. This is one of Kubernetes’ core design assumptions and is typically implemented via CNI plugins that provide flat, routable Pod networking (or equivalent behavior using encapsulation/routing).
This model matters because scheduling is dynamic. The scheduler can place Pods anywhere in the cluster, and applications should not need to know whether a peer is on the same node or a different node. With the Kubernetes network model, Pod-to-Pod communication works uniformly: a Pod can reach any other Pod IP directly, and nodes can reach Pods as well. Services and DNS add stable naming and load balancing, but direct Pod connectivity is part of the baseline model.
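A quick way to observe this in practice (namespace, workload names, Pod IP, and port are placeholders, and curl is assumed to be present in the client image):

  # Show Pod IPs across nodes
  kubectl get pods -o wide -n demo

  # Call another Pod's IP directly from inside a client Pod, with no Service in between
  kubectl exec -n demo deploy/client -- curl -s http://10.244.1.23:8080/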
Option A is incorrect because Pods can communicate directly using Pod IPs even without Services (subject to NetworkPolicies and routing). Services are abstractions for stable access and load balancing; they are not the only way Pods can communicate. Option C is incorrect because Pod IPs are not limited to visibility “inside a Pod”; they are routable within the cluster network. Option D is misleading: Services are often used by Pods (clients) to reach a set of Pods (backends). “Service IP used for communication between Services” is not the fundamental model; Services are virtual IPs for reaching workloads, and “Service-to-Service communication” usually means one workload calling another via the target Service name.
A useful way to remember the official model: (1) all Pods can communicate with all other Pods (no NAT), (2) all nodes can communicate with all Pods (no NAT), (3) Pod IPs are unique cluster-wide. This enables consistent microservice connectivity and supports higher-level traffic management layers like Ingress and service meshes.
=========
Which of the following options is true about considerations for large Kubernetes clusters?
Options:
Kubernetes supports up to 1000 nodes and recommends no more than 1000 containers per node.
Kubernetes supports up to 5000 nodes and recommends no more than 500 Pods per node.
Kubernetes supports up to 5000 nodes and recommends no more than 110 Pods per node.
Kubernetes supports up to 50 nodes and recommends no more than 1000 containers per node.
Answer:
C
Explanation:
The correct answer is C: Kubernetes scalability guidance cites support for up to 5000 nodes and recommends no more than 110 Pods per node (alongside related ceilings such as no more than 150,000 total Pods and 300,000 total containers). The “110 Pods per node” recommendation is a practical limit based on kubelet, networking, and IP addressing constraints, as well as performance characteristics for scheduling, service routing, and node-level resource management. It also lines up with common CNI/IPAM defaults: a /24 Pod CIDR per node provides roughly 254 usable addresses, so capping at 110 Pods leaves headroom for Pod churn and IP reuse.
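On the node side, the per-node cap corresponds to the kubelet's maxPods setting. A minimal KubeletConfiguration sketch (the value shown is the common default, not a tuning recommendation):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  maxPods: 110   # raise only together with Pod CIDR sizing and node capacity planning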
Why the other options are incorrect: A and D reference “containers per node,” which is not the standard sizing guidance (Kubernetes typically discusses Pods per node). B’s “500 Pods per node” is far above typical recommended limits for many environments and would stress IPAM, kubelet, and node resources significantly.
In large clusters, several considerations matter beyond the headline limits: API server and etcd performance, watch/list traffic, controller reconciliation load, CoreDNS scaling, and metrics/observability overhead. You must also plan for IP addressing (cluster CIDR sizing), node sizes (CPU/memory), and autoscaling behavior. On each node, kubelet and the container runtime must handle churn (starts/stops), logging, and volume operations. Networking implementations (kube-proxy, eBPF dataplanes) also have scaling characteristics.
Kubernetes provides patterns to keep systems stable at scale: request/limit discipline, Pod disruption budgets, topology spread constraints, namespaces and quotas, and careful observability sampling. But the exam-style fact this question targets is the published scalability figure and per-node Pod recommendation.
Therefore, the verified true statement among the options is C.
=========
How long should a stable API element in Kubernetes be supported (at minimum) after deprecation?
Options:
9 months
24 months
12 months
6 months
Answer:
C
Explanation:
Kubernetes has a formal API deprecation policy to balance stability for users with the ability to evolve the platform. For a stable (GA) API element, the policy commits to continuing to serve that API for a minimum of 12 months or three releases after deprecation, whichever is longer. The minimum asked for in this question is therefore 12 months, which corresponds to option C.
In practice, Kubernetes releases occur roughly every three to four months, and the deprecation policy is commonly communicated in terms of “releases” as well as time. A GA API that is deprecated in one release is typically kept available for multiple subsequent releases, giving cluster operators and application teams time to migrate manifests, client libraries, controllers, and automation. This matters because Kubernetes is often at the center of production delivery pipelines; abrupt API removals would break deployments, upgrades, and tooling. By guaranteeing a minimum support window, Kubernetes enables predictable upgrades and safer lifecycle management.
This policy also encourages teams to track API versions and plan migrations. For example, workloads might start on a beta API (which can change), but once an API reaches stable, users can expect a stronger compatibility promise. Deprecation warnings help surface risk early. In many clusters, you’ll see API server warnings and tooling hints when manifests use deprecated fields/versions, allowing proactive remediation before the removal release.
The 6- and 9-month options would be too short for many enterprises to coordinate changes across multiple teams and environments, and while some vendors or distributions may support APIs for longer, 24 months is not the stated Kubernetes minimum; 12 months is. The key operational takeaway is to not ignore deprecation warnings: they are your clock for migration planning. Treat API version upgrades as routine cluster lifecycle hygiene so you are not blocked during Kubernetes version upgrades when deprecated APIs are finally removed.
=========
In Kubernetes, which command is the most efficient way to check the progress of a Deployment rollout and confirm if it has completed successfully?
Options:
kubectl get deployments --show-labels -o wide
kubectl describe deployment my-deployment --namespace=default
kubectl logs deployment/my-deployment --all-containers=true
kubectl rollout status deployment/my-deployment
Answer:
D
Explanation:
When performing rolling updates in Kubernetes, it is important to have a clear and efficient way to track the progress of a Deployment rollout and determine whether it has completed successfully. The most direct and purpose-built command for this task is kubectl rollout status deployment/my-deployment, making option D the correct answer.
The kubectl rollout status command is specifically designed to monitor the state of rollouts for resources such as Deployments, StatefulSets, and DaemonSets. It provides real-time feedback on the rollout process, including whether new Pods have been created, old Pods are being terminated, and if the desired number of updated replicas has become available. The command blocks until the rollout either completes successfully or fails, which makes it especially useful in automation and CI/CD pipelines.
Option A is incorrect because kubectl get deployments only provides a snapshot view of deployment status fields and does not actively track rollout progress. Option B can provide detailed information and events, but it is verbose and not optimized for quickly confirming rollout completion. Option C is incorrect because Deployment objects themselves do not produce logs; logs are generated by Pods and containers, not higher-level workload resources.
The rollout status command also integrates with Kubernetes’ revision history, ensuring that it accurately reflects the current state of the Deployment’s update strategy. If a rollout is stuck due to failed Pods, readiness probe failures, or resource constraints, the command will indicate that the rollout is not progressing, helping operators quickly identify issues.
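Typical usage looks like the following (the Deployment name is a placeholder; the timeout keeps CI jobs from hanging on a stuck rollout):

  # Block until the rollout succeeds, fails, or the timeout expires
  kubectl rollout status deployment/my-deployment --timeout=120s

  # Inspect revisions and roll back if the new version is unhealthy
  kubectl rollout history deployment/my-deployment
  kubectl rollout undo deployment/my-deployment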
In summary, kubectl rollout status deployment/my-deployment is the most efficient and reliable way to check rollout progress and confirm success. It is purpose-built for rollout tracking, easy to interpret, and widely used in production Kubernetes workflows, making Option D the correct and verified answer.
=========
Which of the following is a lightweight tool that manages traffic flows between services, enforces access policies, and aggregates telemetry data, all without requiring changes to application code?
Options:
NetworkPolicy
Linkerd
kube-proxy
Nginx
Answer:
B
Explanation:
Linkerd is a lightweight service mesh that manages service-to-service traffic, security policies, and telemetry without requiring application code changes—so B is correct. A service mesh introduces a dedicated layer for east-west traffic (internal service calls) and typically provides features like mutual TLS (mTLS), retries/timeouts, traffic shaping, and consistent metrics/tracing signals. Linkerd is known for being simpler and resource-efficient relative to some alternatives, which aligns with the “lightweight tool” phrasing.
Why this matches the description: In a service mesh, workload traffic is intercepted by a proxy layer (often as a sidecar or node-level/ambient proxy) and managed centrally by mesh control components. This allows security and traffic policy to be applied uniformly without modifying each microservice. Telemetry is also generated consistently because the proxies observe traffic directly and emit metrics and traces about request rates, latency, and errors.
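In Linkerd specifically, proxy injection is usually switched on with an annotation rather than code changes; a minimal sketch, assuming a placeholder namespace name:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: shop                        # placeholder namespace
    annotations:
      linkerd.io/inject: enabled      # Pods created here get the Linkerd sidecar proxy injected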
The other choices don’t fit. NetworkPolicy is a Kubernetes resource that controls allowed network flows (L3/L4) but does not provide L7 traffic management, retries, identity-based mTLS, or automatic telemetry aggregation. kube-proxy implements Service networking rules (ClusterIP/NodePort forwarding) but does not enforce access policies at the service identity level and is not a telemetry system. Nginx can be used as an ingress controller or reverse proxy, but it is not inherently a full service mesh spanning all service-to-service communication and policy/telemetry across the mesh by default.
In cloud native architecture, service meshes help address cross-cutting concerns—security, observability, and traffic management—without embedding that logic into every application. The question’s combination of “traffic flows,” “access policies,” and “aggregates telemetry” maps directly to a mesh, and the lightweight mesh option provided is Linkerd.
=========
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
Options:
Border Gateway Protocol
IP Address Management
Pod Security Policy
Network Policies
Answer:
D
Explanation:
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources—but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI/plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect—Pods will communicate freely by default.
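A minimal NetworkPolicy sketch (namespace, labels, and port are placeholders) that only takes effect when the CNI enforces the API:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-api
    namespace: demo                   # placeholder namespace
  spec:
    podSelector:
      matchLabels:
        app: api                      # the Pods this policy protects
    policyTypes: ["Ingress"]
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: frontend             # only frontend Pods may connect
      ports:
      - protocol: TCP
        port: 8080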
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy. Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement. Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer—limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer is D: Network Policies.
=========
The IPv4/IPv6 dual stack in Kubernetes:
Options:
Translates an IPv4 request from a Service to an IPv6 Service.
Allows you to access the IPv4 address by using the IPv6 address.
Requires NetworkPolicies to prevent Services from mixing requests.
Allows you to create IPv4 and IPv6 dual stack Services.
Answer:
D
Explanation:
The correct answer is D: Kubernetes dual-stack support allows you to create Services (and Pods, depending on configuration) that use both IPv4 and IPv6 addressing. Dual-stack means the cluster is configured to allocate and route traffic for both IP families. For Services, this can mean assigning both an IPv4 ClusterIP and an IPv6 ClusterIP so clients can connect using either family, depending on their network stack and DNS resolution.
Option A is incorrect because dual-stack is not about protocol translation (that would be NAT64/other gateway mechanisms, not the core Kubernetes dual-stack feature). Option B is also a form of translation/aliasing that isn’t what Kubernetes dual-stack implies; having both addresses available is different from “access IPv4 via IPv6.” Option C is incorrect: dual-stack does not inherently require NetworkPolicies to “prevent mixing requests.” NetworkPolicies are about traffic control, not IP family separation.
In Kubernetes, dual-stack requires support across components: the network plugin (CNI) must support IPv4/IPv6, the cluster must be configured with both Pod CIDRs and Service CIDRs, and DNS should return appropriate A and AAAA records for Service names. Once configured, you can specify preferences such as ipFamilyPolicy (e.g., PreferDualStack) and ipFamilies (IPv4, IPv6 order) for Services to influence allocation behavior.
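A minimal dual-stack Service sketch (name, labels, and ports are placeholders; the cluster must already be configured for dual-stack for both families to be allocated):

  apiVersion: v1
  kind: Service
  metadata:
    name: web                          # placeholder name
  spec:
    ipFamilyPolicy: PreferDualStack    # request both families when available
    ipFamilies: ["IPv4", "IPv6"]       # preferred allocation order
    selector:
      app: web
    ports:
    - port: 80
      targetPort: 8080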
Operationally, dual-stack is useful for environments transitioning to IPv6, supporting IPv6-only clients, or running in mixed networks. But it adds complexity: address planning, firewalling, and troubleshooting need to consider two IP families. Still, the definition in the question is straightforward: Kubernetes dual-stack enables dual-stack Services, which is option D.
=========
What function does kube-proxy provide to a cluster?
Options:
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
Answer:
B
Explanation:
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
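To see the two sides kube-proxy connects, you can compare a Service with its EndpointSlices (the Service name is a placeholder):

  # The Service's stable virtual IP and port
  kubectl get service web

  # The backend Pod IPs that node-level rules forward traffic to
  kubectl get endpointslices -l kubernetes.io/service-name=web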
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
Which API object is the recommended way to run a scalable, stateless application on your cluster?
Options:
ReplicaSet
Deployment
DaemonSet
Pod
Answer:
B
Explanation:
For a scalable, stateless application, Kubernetes recommends using a Deployment because it provides a higher-level, declarative management layer over Pods. A Deployment doesn’t just “run replicas”; it manages the entire lifecycle of rolling out new versions, scaling up/down, and recovering from failures by continuously reconciling the current cluster state to the desired state you define. Under the hood, a Deployment typically creates and manages a ReplicaSet, and that ReplicaSet ensures a specified number of Pod replicas are running at all times. This layering is the key: you get ReplicaSet’s self-healing replica maintenance plus Deployment’s rollout/rollback strategies and revision history.
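A minimal Deployment sketch for a stateless application (name, labels, image, and probe path are placeholders):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web                       # placeholder name
  spec:
    replicas: 3                     # interchangeable, stateless replicas
    selector:
      matchLabels:
        app: web
    strategy:
      type: RollingUpdate           # replace old replicas gradually during upgrades
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: app
          image: nginx:1.27         # placeholder image
          ports:
          - containerPort: 80
          readinessProbe:           # gate traffic on replica health during rollouts
            httpGet:
              path: /
              port: 80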
Why not the other options? A Pod is the smallest deployable unit, but it’s not a scalable controller—if a Pod dies, nothing automatically replaces it unless a controller owns it. A ReplicaSet can maintain N replicas, but it does not provide the full rollout orchestration (rolling updates, pause/resume, rollbacks, and revision tracking) that you typically want for stateless apps that ship frequent releases. A DaemonSet is for node-scoped workloads (one Pod per node or subset of nodes), like log shippers or node agents, not for “scale by replicas.”
For stateless applications, the Deployment model is especially appropriate because individual replicas are interchangeable; the application does not require stable network identities or persistent storage per replica. Kubernetes can freely replace or reschedule Pods to maintain availability. Deployment strategies (like RollingUpdate) allow you to upgrade without downtime by gradually replacing old replicas with new ones while keeping the Service endpoints healthy. That combination—declarative desired state, self-healing, and controlled updates—makes Deployment the recommended object for scalable stateless workloads.
=========
What framework does Kubernetes use to authenticate users with JSON Web Tokens?
Options:
OpenID Connect
OpenID Container
OpenID Cluster
OpenID CNCF
Answer:
A
Explanation:
Kubernetes commonly authenticates users using OpenID Connect (OIDC) when JSON Web Tokens (JWTs) are involved, so A is correct. OIDC is an identity layer on top of OAuth 2.0 that standardizes how clients obtain identity information and how JWTs are issued and validated.
In Kubernetes, authentication happens at the API server. When OIDC is configured, the API server validates incoming bearer tokens (JWTs) by checking token signature and claims against the configured OIDC issuer and client settings. Kubernetes can use OIDC claims (such as sub, email, groups) to map the authenticated identity to Kubernetes RBAC subjects. This is how enterprises integrate clusters with identity providers such as Okta, Dex, Azure AD, or other OIDC-compliant IdPs.
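As an illustration, OIDC validation is enabled with kube-apiserver flags along these lines (the issuer URL, client ID, and claim names are placeholders and depend on the identity provider):

  kube-apiserver \
    --oidc-issuer-url=https://idp.example.com \
    --oidc-client-id=kubernetes \
    --oidc-username-claim=email \
    --oidc-groups-claim=groups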
Options B, C, and D are fabricated phrases and not real frameworks. Kubernetes documentation explicitly references OIDC as a supported method for token-based user authentication (alongside client certificates, bearer tokens, static token files, and webhook authentication). The key point is that Kubernetes does not “invent” JWT auth; it integrates with standard identity providers through OIDC so clusters can participate in centralized SSO and group-based authorization.
Operationally, OIDC authentication is typically paired with:
RBAC for authorization (“what you can do”)
Audit logging for traceability
Short-lived tokens and rotation practices for security
Group claim mapping to simplify permission management
So, the verified framework Kubernetes uses with JWTs for user authentication is OpenID Connect.
=========
At which layer would distributed tracing be implemented in a cloud native deployment?
Options:
Network
Application
Database
Infrastructure
Answer:
B
Explanation:
Distributed tracing is implemented primarily at the application layer, so B is correct. The reason is simple: tracing is about capturing the end-to-end path of a request as it traverses services, libraries, queues, and databases. That “request context” (trace ID, span ID, baggage) must be created, propagated, and enriched as code executes. While infrastructure components (proxies, gateways, service meshes) can generate or augment trace spans, the fundamental unit of tracing is still tied to application operations (an HTTP handler, a gRPC call, a database query, a cache lookup).
In Kubernetes-based microservices, distributed tracing typically uses standards like OpenTelemetry for instrumentation and context propagation. Application frameworks emit spans for key operations, attach attributes (route, status code, tenant, retry count), and propagate context via headers (e.g., W3C Trace Context). This is what lets you reconstruct “Service A → Service B → Service C” for one user request and identify the slow or failing hop.
Why other layers are not the best answer:
Network focuses on packets/flows, but tracing is not a packet-capture problem; it’s a causal request-path problem across services.
Database spans are part of traces, but tracing is not “implemented in the database layer” overall; DB spans are one component.
Infrastructure provides the platform and can observe traffic, but without application context it can’t fully represent business operations (and many useful attributes live in app code).
So the correct layer for “where tracing is implemented” is the application layer—even when a mesh or proxy helps, it’s still describing application request execution across components.
=========