Pure Certified Portworx Enterprise Professional (PEP) Exam Questions and Answers
A Portworx administrator wants to create a storage class that can be used to create volumes with the following characteristics:
• Encrypted volume
• Two replicas
Which definition should the administrator use?
Options:
A.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-encrypted
provisioner: kubernetes.io/portworx-volume
parameters:
  encrypted: "true"
  repl: "2"
B.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-encrypted
provisioner: kubernetes.io/portworx-volume
parameters:
  sharedv4: "true"
  repl: "2"
C.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-encrypted
provisioner: kubernetes.io/portworx-volume
parameters:
  secure: "true"
  repl: "2"
Answer:
C
Explanation:
In Portworx StorageClass definitions, encryption is enabled with the parameter secure: "true", and repl: "2" specifies two replicas. Option C sets both of these, so volumes provisioned with that StorageClass will be encrypted at rest and maintain two replicas for data redundancy. Option A uses encrypted: "true", which is not a recognized Portworx StorageClass parameter. Option B uses sharedv4: "true", which enables NFS-like shared access, not encryption. The Portworx StorageClass parameter documentation confirms secure as the flag for encryption and repl as the replication factor, enabling administrators to enforce data security and availability policies declaratively through Kubernetes manifests.
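As a usage sketch, a PVC that consumes the StorageClass from option C would trigger provisioning of an encrypted, two-replica volume (the claim name and size here are hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data          # hypothetical claim name
spec:
  storageClassName: px-encrypted
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi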
Which platform is supported by Portworx for deployment?
Options:
A. Docker Swarm
B. DCOS
C. AWS
Answer:
C
Explanation:
Portworx primarily supports deployment on Kubernetes and is well-integrated with major cloud platforms including Amazon Web Services (AWS). AWS offers native infrastructure and storage services that complement Portworx's capabilities for cloud-native storage, including integration with Elastic Block Store (EBS) and S3 Object Storage. While Portworx historically supported container orchestrators like Docker Swarm and Mesosphere DC/OS (DCOS), the primary and recommended platform for production deployments today is Kubernetes on cloud providers such as AWS, Azure, and Google Cloud. AWS's ecosystem allows Portworx to leverage scalable compute and storage infrastructure, advanced networking, and cloud security features, making it a preferred platform. Portworx official platform support documentation lists AWS as a key supported environment for its container storage solutions.
An infrastructure admin wants to prevent Portworx from installing on two nodes.
What label do those nodes need to have?
Options:
A. px/enabled=false
B. px/service=stop
C. px/storage-node=false
Answer:
A
Explanation:
Restricting Portworx installation on certain Kubernetes nodes is achieved by labeling those nodes with px/enabled=false. This label signals the Portworx Operator or installer to exclude these nodes from Portworx deployment, allowing admins to reserve nodes for other workloads or prevent Portworx from running on unsupported hardware. The labels px/service=stop and px/storage-node=false are not recognized controls in the Portworx installation process. Portworx deployment guides consistently document the use of px/enabled=false for node exclusion, providing a simple, declarative way to control cluster topology and resource assignment during Portworx installations and upgrades.
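A minimal sketch of applying the label before installation (node names are hypothetical):

kubectl label nodes node-01 node-02 px/enabled=false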
What is the primary function of the Portworx OCI monitor pod in a Kubernetes environment?
Options:
A. To facilitate the installation of Portworx
B. To monitor the health of Kubernetes nodes
C. To manage Kubernetes network policies
Answer:
A
Explanation:
The Portworx OCI monitor pod's primary function is to facilitate the installation of Portworx on each Kubernetes node. The pod runs the portworx/oci-monitor image, which pulls the Portworx OCI bits onto the host, installs or upgrades them, and keeps the node's Portworx runtime aligned with the desired cluster specification. Its name reflects this role: it installs and supervises the Portworx OCI container on the host, rather than monitoring the general health of Kubernetes nodes, and it has nothing to do with Kubernetes network policies. Portworx architecture documentation describes the OCI monitor as the per-node component responsible for installing and managing Portworx itself.
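One hedged way to confirm which image the per-node pods run (assuming the conventional kube-system namespace and the standard name=portworx pod label):

kubectl -n kube-system get pods -l name=portworx \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'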
What is the primary command used to back up a volume in Portworx?
Options:
A. pxctl volume snapshot create
B. pxctl volume save
C. pxctl backup volume
Answer:
A
Explanation:
The primary command to back up a volume in Portworx is pxctl volume snapshot create. This command creates a point-in-time snapshot of the specified volume, capturing its state for backup or recovery purposes. Snapshots can be local or uploaded to cloud object stores as part of disaster recovery strategies. The snapshot operation is efficient and minimally intrusive, using copy-on-write mechanisms to avoid full data duplication. Although commands like pxctl volume save or pxctl backup volume might exist in other storage systems, Portworx explicitly uses pxctl volume snapshot create as its core volume backup command. The Portworx CLI documentation details this command as fundamental for data protection and snapshot lifecycle management in the cluster.
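A minimal sketch (volume and snapshot names are hypothetical):

pxctl volume snapshot create --name mysql-snap-1 mysql-data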
What is the minimum Stork version required to perform an Application Backup?
Options:
A. Any Stork version works
B. Stork 23.3.0
C. Stork 2.3
Answer:
C
Explanation:
Stork version 2.3 is the minimum version required to support Application Backup features in Portworx. Application Backup allows for consistent snapshots and restores of complex, multi-volume, and multi-pod stateful applications. This capability depends on enhancements introduced in Stork 2.3 that enable application-aware backup orchestration, coordination between Kubernetes and storage layers, and integration with backup policies. Earlier Stork versions lack these features, making them unsuitable for application-level backups. Portworx release notes and Stork documentation confirm that version 2.3 introduced key functionalities that underpin the reliable backup and restore workflows for stateful workloads, making it a baseline requirement for disaster recovery and business continuity implementations involving application backups.
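One hedged way to check the running Stork version (assuming the standard stork Deployment in kube-system):

kubectl -n kube-system get deployment stork \
  -o jsonpath='{.spec.template.spec.containers[0].image}'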
How should a Portworx administrator enable the Alertmanager?
Options:
A. Create a config map with the Alertmanager configuration and enable Alertmanager via the pxctl CLI.
B. Create a secret with the Alertmanager configuration and enable Alertmanager in the StorageCluster object.
C. Deploy Alertmanager by following the official Alertmanager documentation and integrate it with Portworx by enabling monitoring webhook in the StorageCluster object.
Answer:
B
Explanation:
Enabling Alertmanager in Portworx involves creating a Kubernetes Secret containing the Alertmanager configuration (such as alert routing rules and notification channels) and referencing this secret in the Portworx StorageCluster manifest. This integration allows Portworx's monitoring stack to forward alerts to Alertmanager for centralized alert processing and notifications. Unlike ConfigMaps, which are generally used for non-sensitive data, Secrets protect sensitive alert configuration. Enabling Alertmanager via pxctl CLI is not supported as Portworx relies on Kubernetes declarative configuration for monitoring components. Additionally, deploying Alertmanager independently and integrating through webhooks requires manual setup but is not the recommended or integrated approach. Portworx official observability documentation details the secret-based configuration as the standard and secure method to enable and manage Alertmanager within Portworx clusters for robust alert handling.
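A sketch of the documented flow; the secret name alertmanager-portworx and the monitoring fields below follow common Operator conventions, but exact field names can vary by Operator version:

# Create the secret from a local Alertmanager config file (filename is hypothetical)
kubectl -n kube-system create secret generic alertmanager-portworx \
  --from-file=config.yaml=alertmanager-config.yaml

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster            # hypothetical cluster name
  namespace: kube-system
spec:
  monitoring:
    prometheus:
      enabled: true
      alertManager:
        enabled: true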
What command should an administrator run to verify a Portworx upgrade on Kubernetes?
Options:
A. pxctl get storagenodes
B. kubectl get storagenodes
C. kubectl get nodes -o wide
Answer:
B
Explanation:
To verify a Portworx upgrade on Kubernetes, administrators run kubectl get storagenodes in the namespace where Portworx is installed. The Portworx Operator maintains a StorageNode custom resource for every node, and this command lists each storage node with its Portworx version and status, so inspecting the version column confirms whether all nodes have been upgraded to the desired release. pxctl has no get storagenodes subcommand, and kubectl get nodes -o wide shows Kubernetes node information but not Portworx versioning. Portworx upgrade documentation recommends checking the StorageNode resources after an upgrade to ensure consistent cluster software versions and successful upgrade completion.
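A minimal sketch (assuming Portworx is installed in kube-system; the VERSION column should show the target release on every node):

kubectl -n kube-system get storagenodes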
Which two CRDs are required for performing an ApplicationBackup?
Options:
A. ApplicationBackup and migrations
B. BackupLocation and RestoreBackup
C. BackupLocation and ApplicationBackup
Answer:
C
Explanation:
To perform an ApplicationBackup in Portworx, two Kubernetes Custom Resource Definitions (CRDs) are essential: BackupLocation and ApplicationBackup. The BackupLocation CRD defines the target backup storage, such as an S3 bucket or NFS share, including credentials and endpoints. ApplicationBackup defines the specifics of the backup operation, including which namespaces and resources to back up and which BackupLocation to target. Together, they enable declarative backup management within Kubernetes, allowing administrators to configure, automate, and monitor backups of stateful applications using Portworx. These CRDs provide flexibility and integration with Kubernetes-native tools, improving disaster recovery capabilities. Portworx backup documentation describes these CRDs as the foundation of its application-aware backup and restore system.
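A hedged sketch of the two resources (bucket, credentials, names, and namespace are all hypothetical placeholders):

apiVersion: stork.libopenstorage.org/v1alpha1
kind: BackupLocation
metadata:
  name: s3-backups
  namespace: demo
location:
  type: s3
  path: my-backup-bucket
  s3Config:
    region: us-east-1
    accessKeyID: <ACCESS_KEY>
    secretAccessKey: <SECRET_KEY>
    endpoint: s3.amazonaws.com
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: demo-backup
  namespace: demo
spec:
  backupLocation: s3-backups
  namespaces:
    - demo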
Which command shows a summary of the Portworx cluster status?
Options:
A. helm list --px
B. pxctl cluster status
C. kubectl get pxstatus
Answer:
B
Explanation:
The command pxctl cluster status provides a concise summary of the Portworx cluster's health and operational status. This includes node states, storage pool information, volume statuses, and quorum information. It is the primary CLI command for administrators to quickly assess cluster health and detect any issues affecting storage availability or performance. helm list --px is a Helm package management command unrelated to cluster status, and kubectl get pxstatus is not a valid Kubernetes or Portworx command. Portworx documentation recommends pxctl cluster status as an essential monitoring command during routine operations and troubleshooting to ensure the cluster is functioning properly and that all nodes are communicating and healthy.
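Since pxctl ships inside the Portworx pods, one common way to invoke it (shown here with pxctl status, the widely documented summary form) is:

# Pick any Portworx pod and run the status summary inside it
PX_POD=$(kubectl -n kube-system get pods -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$PX_POD" -- /opt/pwx/bin/pxctl status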
An application team is preparing to deploy an ElasticSearch application and wants all Portworx volumes created on six specific Kubernetes nodes.
Which Portworx feature should they use to achieve this?
Options:
A. Stork
B. Autopilot
C. Volume placement strategy
Answer:
C
Explanation:
To ensure Portworx volumes for an ElasticSearch application are created only on specific Kubernetes nodes, the Volume Placement Strategy feature is used. This feature allows administrators to define affinity or anti-affinity rules that restrict volume provisioning to a subset of nodes. By tagging the six nodes with appropriate labels and referencing those labels in a placement strategy attached to the StorageClass, Portworx guarantees that volumes will only be provisioned on those nodes. This targeted volume placement is critical for performance optimization, data locality, and compliance with infrastructure constraints. Autopilot automates capacity scaling and Stork provides storage-aware scheduling; neither directly controls which nodes host volume replicas. The Portworx deployment documentation highlights Volume Placement Strategy as the tool for precise volume-to-node mapping in Kubernetes clusters.
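A hedged sketch of a VolumePlacementStrategy; the node label px/app-tier=elasticsearch is a hypothetical label applied to the six nodes, and a StorageClass would then reference the strategy via its placement_strategy parameter:

apiVersion: portworx.io/v1beta2
kind: VolumePlacementStrategy
metadata:
  name: elastic-nodes
spec:
  replicaAffinity:
    - enforcement: required
      matchExpressions:
        - key: px/app-tier
          operator: In
          values:
            - elasticsearch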
What is a built-in role in Portworx’s RBAC model?
Options:
A. system.admin
B. storage.manager
C. storage.admin
Answer:
A
Explanation:
Portworx implements Role-Based Access Control (RBAC) to secure management operations within the cluster. One of the key built-in roles is system.admin, which has full administrative privileges across Portworx resources. This role allows users to manage storage nodes, volumes, snapshots, backups, and cluster-wide settings. The system.admin role is typically assigned to trusted cluster operators or administrators responsible for cluster maintenance and configuration. Other roles like storage.manager or storage.admin are not standard built-in roles in Portworx RBAC but may be custom roles defined in some environments. The official Portworx security and RBAC documentation details system.admin as the comprehensive administrative role with full cluster management capabilities, critical for secure operations and delegation of responsibilities.
What solution should a Portworx administrator use to store snapshots of a critical application volume in an Object Store?
Options:
A. Cloud Snapshot
B. Backups
C. Local Snapshot
Answer:
A
Explanation:
Cloud Snapshots are designed to store snapshots of critical application volumes directly into an external Object Store such as Amazon S3 or other S3-compatible storage. This solution provides offsite durability, disaster recovery capability, and long-term retention beyond the cluster's local storage capacity. Cloud Snapshots allow administrators to create consistent, incremental snapshots that are efficiently uploaded to cloud storage, enabling protection against data loss scenarios such as cluster failure or site outages. This contrasts with Local Snapshots, which remain on the cluster's local storage, and Backups, which may refer to full data copies. The Portworx documentation explains Cloud Snapshots as the recommended approach for storing critical volume snapshots securely and durably offsite, supporting business continuity strategies.
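A hedged sketch using the STORK snapshot CRD, where the portworx/snapshot-type: cloud annotation requests a cloud snapshot (names are hypothetical; object-store credentials must already be configured in the cluster):

apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-cloud-snap
  namespace: demo
  annotations:
    portworx/snapshot-type: cloud
spec:
  persistentVolumeClaimName: mysql-data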
What command should the administrator run if Portworx logs report "Node is not in quorum"?
Options:
A. The administrator should do nothing.
B. The administrator should check output of pxctl status on each storage node.
C. The administrator should run pxctl service status.
Answer:
B
Explanation:
If Portworx logs indicate that a node is not in quorum, the administrator's first step is to verify the status of each storage node in the cluster using the command pxctl status. This command provides detailed information about node connectivity, quorum status, and cluster health. The quorum is critical for distributed consensus and cluster consistency. Checking each node's status helps identify network partitions, node failures, or communication issues causing quorum loss. Simply running pxctl service status provides service-level info but not the comprehensive node quorum details needed. The Portworx troubleshooting documentation stresses using pxctl status as the primary diagnostic tool when encountering quorum-related alerts to ensure cluster stability and resolve issues promptly.
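A quick sketch for checking every node from the command line (assuming the standard name=portworx pod label):

# Run pxctl status inside each Portworx pod and review the quorum and node state output
for pod in $(kubectl -n kube-system get pods -l name=portworx -o jsonpath='{.items[*].metadata.name}'); do
  echo "== $pod =="
  kubectl -n kube-system exec "$pod" -- /opt/pwx/bin/pxctl status
done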
Which flag in the Portworx StorageCluster spec enables telemetry?
Options:
A. spec.autopilot.enabled
B. spec.telemetry.enabled
C. spec.csi.enabled
Answer:
B
Explanation:
Telemetry in Portworx refers to the automated collection and reporting of cluster performance and health metrics to Pure1 or other monitoring services. To enable telemetry, the spec.telemetry.enabled flag must be set to true in the StorageCluster custom resource. This setting activates the telemetry pod on each node, which collects data such as resource usage, storage capacity, and errors, then securely uploads it to Pure Storage's management platform. Enabling telemetry helps administrators gain insights into cluster performance trends, preemptively identify issues, and optimize resource utilization. The Portworx Operator respects this flag during installation and upgrades to ensure telemetry is consistently configured. Neither spec.autopilot.enabled (which controls the Autopilot feature) nor spec.csi.enabled (which controls CSI driver deployment) affects telemetry settings. Official Portworx documentation highlights this flag as critical for activating health monitoring and analytics features within Portworx clusters.
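As a sketch; note that on recent Operator versions the telemetry flag is commonly nested under spec.monitoring, so check the StorageCluster schema for your release:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster            # hypothetical cluster name
  namespace: kube-system
spec:
  monitoring:
    telemetry:
      enabled: true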
What is the primary benefit of using Dynamic Provisioning and Storage Classes in Portworx?
Options:
A. They limit the customization of volume parameters to default settings only.
B. They require manual creation of Portworx volumes before they can be used.
C. They enable the automatic provisioning of Portworx volumes without manual intervention.
Answer:
C
Explanation:
Dynamic Provisioning in Kubernetes with Portworx StorageClasses enables automatic, on-demand creation of storage volumes as applications request them through Persistent Volume Claims (PVCs). This eliminates the need for administrators to manually create and manage volumes, significantly improving operational efficiency and accelerating application deployment. StorageClasses encapsulate parameters such as replication, encryption, and IO profiles, ensuring consistent volume configuration. Dynamic Provisioning also supports scaling and workload agility by provisioning storage transparently based on application needs. This feature is central to cloud-native storage management and is well documented in both Kubernetes and Portworx installation guides. It contrasts with manual volume creation, which is labor-intensive and error-prone; dynamic provisioning thus enhances automation and simplifies storage lifecycle management.
What is a local snapshot in the context of Portworx?
Options:
A. A snapshot that is stored in a remote data center.
B. A snapshot that is automatically backed up to the cloud.
C. A snapshot that is stored on the same cluster as the original volume.
Answer:
C
Explanation:
A local snapshot in Portworx refers to a point-in-time copy of a volume's data that is stored within the same storage cluster as the original volume. Local snapshots use efficient copy-on-write techniques to minimize storage overhead while preserving the volume state for backup, recovery, or rollback operations. Unlike cloud or remote snapshots, local snapshots do not require network transfer or object storage integration, enabling fast snapshot creation and restoration with low latency. They are ideal for short-term data protection, testing, or recovery scenarios where immediate access to snapshots is required. Portworx's snapshot documentation describes local snapshots as the foundational snapshot type, essential for operational backups and data consistency within Kubernetes clusters using Portworx storage.
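A hedged sketch of a local snapshot request via the STORK snapshot CRD (names hypothetical); without a portworx/snapshot-type annotation, the snapshot defaults to local:

apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-local-snap
  namespace: demo
spec:
  persistentVolumeClaimName: mysql-data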
A Portworx administrator wants to control which nodes will host the internal KVDB.
What steps must an administrator take to ensure that KVDB installs on NODE01, NODE03, and NODE05?
Options:
A. It is not possible to configure the location of the KVDB prior to installation.
B. Change the following in the 'StorageCluster' spec prior to installation:
spec:
  kvdb:
    selector:
      matchNodeName:
        - NODE01
        - NODE03
        - NODE05
C. Label NODE01, NODE03, and NODE05 with 'px/metadata-node=true' prior to installation.
Answer:
C
Explanation:
Portworx chooses internal KVDB nodes automatically, but administrators can constrain placement by labeling the desired nodes with px/metadata-node=true before installation. When this label is present, Portworx runs the internal KVDB only on the labeled nodes, giving the administrator control over quorum placement, fault tolerance, and performance. Labeling NODE01, NODE03, and NODE05 therefore ensures the KVDB installs on exactly those nodes. The StorageCluster spec does not expose a kvdb.selector.matchNodeName field, so the manifest in option B is not valid, and option A is incorrect because placement can be configured prior to installation. Portworx installation documentation describes the px/metadata-node=true label as the supported method for controlling internal KVDB placement.
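A minimal sketch using the node names from the question:

# Label the three nodes before installing Portworx
kubectl label nodes NODE01 NODE03 NODE05 px/metadata-node=true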
What step is necessary to start using encrypted PVCs in Portworx?
Options:
A. Select secret provider.
B. StorageClass needs the following parameter: secure: enabled.
C. Configure IO profiles.
Answer:
A
Explanation:
Using encrypted Persistent Volume Claims (PVCs) with Portworx requires that an administrator first configure a secret provider responsible for managing the encryption keys. The secret provider could be an external Key Management System (KMS) such as AWS KMS, Google Cloud KMS, HashiCorp Vault, or Kubernetes Secrets. This step is critical because encryption keys are essential to securely encrypt and decrypt data on volumes. Although enabling encryption in the StorageClass via the secure: "true" parameter is necessary to activate encryption on volumes, it is insufficient without a properly configured secret provider to manage the keys. The secret provider ensures keys are securely stored, rotated, and accessed, fulfilling compliance and security requirements. Portworx documentation stresses this as a foundational step to enable encrypted PVCs, highlighting that without a configured secret provider, encrypted volumes cannot be provisioned or used effectively.
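A hedged sketch of per-volume encryption once a Kubernetes Secrets provider is configured; the px/secret-name annotation points at a secret holding the passphrase, and all names here are hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: secure-data
  annotations:
    px/secret-name: volume-encryption-key   # secret holding the encryption passphrase
spec:
  storageClassName: px-encrypted            # a StorageClass with secure: "true"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi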
What Kubernetes resource allows visibility of the Parent Volume and the snapshot ID?
Options:
A. VolumeSnapshot
B. PersistentVolumeClaim
C. VolumeSnapshotData
Answer:
A
Explanation:
The VolumeSnapshot Kubernetes resource provides metadata about snapshots of Persistent Volumes, including references to the parent volume and snapshot IDs. It represents a snapshot request and maintains information linking it to the source PVC and the actual snapshot data. This resource enables Kubernetes-native management of volume snapshots, allowing users to create, delete, and list snapshots declaratively. Portworx integrates with Kubernetes snapshot APIs and populates VolumeSnapshot resources with detailed information necessary for managing snapshot lifecycle and restoring data. The Kubernetes and Portworx documentation highlight VolumeSnapshot as the primary interface to monitor and interact with snapshot metadata, crucial for backup, restore, and disaster recovery workflows in containerized environments.
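A quick way to inspect that metadata (resource names are hypothetical):

# The output includes the source PVC and the snapshot data reference
kubectl -n demo get volumesnapshot mysql-local-snap -o yaml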
What information is included in the Portworx diagnostics bundle (diags)?
Options:
A. Portworx journal logs, CLI command outputs, and basic OS information
B. User activity logs, security policies, and firewall rules
C. Application logs, Kubernetes events, and network configurations
Answer:
A
Explanation:
The Portworx diagnostics bundle, known as "diags," aggregates comprehensive diagnostic data for troubleshooting. This includes Portworx journal logs, which record detailed system and service events essential for identifying errors or malfunctions. Additionally, the bundle contains outputs from key CLI commands such as pxctl status and pxctl volume list that provide snapshots of the cluster's health, volume states, and configuration at the time of collection. Basic operating system information, including kernel version, disk hardware details, and network interfaces, is also captured to understand the underlying environment. Together, these components equip Portworx support and administrators with the contextual data needed for effective root cause analysis and faster issue resolution. The official Portworx support documentation recommends collecting and submitting this bundle for all significant troubleshooting cases as it expedites problem diagnosis and resolution.
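A minimal sketch of generating a bundle with pxctl (flags can vary by version; -a asks for all available diagnostics):

# Run inside a Portworx pod or directly on the host
/opt/pwx/bin/pxctl service diags -a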
What is the correct procedure to upgrade a Portworx cluster from version 3.0 to 3.1 using the Portworx Operator?
Options:
A. Execute the 'pxctl cluster upgrade --version 3.1' command.
B. Edit the StorageCluster CR and update the .spec.image parameter from portworx/oci-monitor:3.0 to portworx/oci-monitor:3.1.
C. No manual upgrade is needed as Portworx will automatically upgrade to the latest version.
Answer:
B
Explanation:
Upgrading Portworx clusters managed by the Kubernetes Operator requires a declarative update to the StorageCluster custom resource. Specifically, the administrator must edit the StorageCluster resource and update the .spec.image field to point to the new version image, such as changing portworx/oci-monitor:3.0 to portworx/oci-monitor:3.1. This change instructs the Operator to roll out the new image across the cluster nodes, performing a seamless upgrade with minimal downtime. The pxctl CLI does not perform upgrades in Operator-managed environments; it is primarily for direct cluster management. The Operator ensures orderly upgrade sequencing, node by node, handling pod restarts and health checks. Automatic upgrades without manual intervention are not supported, to prevent unintentional disruptions. Official Portworx upgrade documentation details this procedure, emphasizing the importance of version pinning and controlled rollout for production stability and rollback capabilities during upgrades.
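One hedged way to apply the change non-interactively (cluster name and namespace are hypothetical):

kubectl -n kube-system patch storagecluster px-cluster --type merge \
  -p '{"spec":{"image":"portworx/oci-monitor:3.1"}}'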