Pure Certified FlashArray Storage Professional Questions and Answers
What is the proper procedure for stopping asynchronous replication and in-progress transfers?
Options:
Removing the volume member from a protection group
Disabling the replication schedule
Disallowing the protection group at the target
Answer: C
Explanation:
According to the official Pure Storage FlashArray Asynchronous Replication Configuration and Best Practices Guide, the proper and immediate method to halt an active, in-progress asynchronous replication transfer is to disallow the protection group at the target.
When you navigate to the target FlashArray and disallow the specific Protection Group, Purity immediately breaks the replication authorization for that group. If there is an in-progress snapshot transfer occurring at that exact moment, the transfer is immediately stopped, and the partially transferred snapshot data is discarded on the target side.
Here is why the other options are incorrect:
Disabling the replication schedule (B): Toggling the replication schedule to "Disabled" only prevents future scheduled snapshots from being created and sent. It does not kill or interrupt a replication transfer that is already in progress.
Removing the volume member from a protection group (A): Modifying the members of a protection group updates the configuration for the next snapshot cycle. It does not actively abort the transmission of the current point-in-time snapshot that the array is already busy sending over the WAN.
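A minimal CLI sketch of the disallow procedure, assuming a replicated protection group that appears on the target as source-array:pgroup1 (names are hypothetical; purepgroup allow/disallow are the CLI equivalents of the GUI toggle in recent Purity releases):

    # Run on the TARGET array: list replicated protection groups and their Allowed status
    purepgroup list
    # Disallow the group; any in-progress transfer for it is halted and discarded
    purepgroup disallow source-array:pgroup1
    # Replication can later be re-authorized with:
    purepgroup allow source-array:pgroup1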
Which protection group cannot be ratcheted for SafeMode?
Options:
A default protection group
Protection groups with hosts or hostgroups
A protection group without a local snapshot schedule
Answer: C
Explanation:
What is SafeMode Ratcheting?: SafeMode is Purity's "immutability" feature that prevents snapshots from being deleted, eradicated, or modified, even by an administrator with compromised credentials. Ratcheting is the process of increasing the protection levels (like extending the retention period) for a protection group (pgroup) to ensure even stricter data safety.
The Dependency on Local Snapshots: SafeMode's primary function is to protect point-in-time copies of data residing on the array. For a protection group to be "ratcheted" into a SafeMode-protected state, it must have an active Local Snapshot Schedule.
Why Option C is the Constraint: If a protection group does not have a local snapshot schedule, there are no local snapshots being generated for SafeMode to "lock." SafeMode cannot protect what doesn't exist locally. While a pgroup might be used for replication only, SafeMode requires the local scheduling component to be active and configured to apply its immutable retention policies.
Why Option B is incorrect: Protection groups are designed to contain hosts, host groups, or volumes. This is the standard way to group related data for snapshot consistency and has no negative impact on SafeMode eligibility.
Operational Note: When you enable SafeMode on a protection group with a local schedule, the "Eradicate" button for those snapshots is disabled. To "ratchet" the protection, you typically work with Pure Storage Support to ensure the retention settings meet your compliance needs.
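As a hedged illustration of the prerequisite, the commands below verify and enable a local snapshot schedule so a protection group becomes eligible for ratcheting; pgroup1 is a hypothetical name and exact flags can vary by Purity release:

    # Check whether the protection group has a local snapshot schedule
    purepgroup list --schedule pgroup1
    # Take a local snapshot every hour (3600 seconds) and turn the schedule on
    purepgroup schedule --snap-frequency 3600 pgroup1
    purepgroup enable --snap pgroup1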
What command must an administrator run to use newly installed DirectFlash Modules (DFM)?
Options:
pureadmin --admit-drive
purearray admit drive
puredrive admit
Answer: C
Explanation:
When new DirectFlash Modules (DFMs) or data packs are physically inserted into a Pure Storage FlashArray, the Purity operating environment detects the new hardware but places the drives in an "unadmitted" state. This safety mechanism prevents the accidental incorporation of drives and allows the system to verify the firmware and health of the modules before they are actively used to store data.
To formally accept these drives into the system's storage pool so their capacity can be utilized, the administrator must execute the CLI command puredrive admit. Once this command is run, the drive status transitions from "unadmitted" to "healthy," and the array's usable capacity expands accordingly.
Here is why the other options are incorrect:
pureadmin --admit-drive (A): This is syntactically incorrect. The pureadmin command suite is used for managing administrator accounts, API tokens, and directory services, not for hardware or drive management.
purearray admit drive (B): This is also incorrect syntax. While purearray is used for array-wide settings and status (like renaming the array or checking space), specific drive-level operations are exclusively handled by the puredrive command structure.
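A brief sketch of the admit workflow described above:

    # New modules appear with a status of "unadmitted"
    puredrive list
    # Admit the unadmitted drives into the storage pool (a specific drive name may also be supplied)
    puredrive admit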
What is the recommended Maximum Transmission Unit (MTU) size for the replication ports on a FlashArray?
Options:
4200
1500
9000
Answer: C
Explanation:
Pure Storage strongly recommends an MTU size of 9000 (Jumbo Frames) for replication networks—such as those used for Asynchronous Replication, ActiveCluster, and ActiveDR—as well as for iSCSI and NVMe/TCP data networks.
A 9000-byte MTU significantly reduces protocol overhead and CPU processing load on the storage controllers by allowing a much larger payload of data to be transmitted inside a single network packet. During heavy replication, this drastically increases throughput and maximizes bandwidth efficiency.
Here is why the other options are incorrect:
1500 (B): While 1500 bytes is the standard default MTU for Ethernet and is exactly what Pure Storage recommends for the management ports (vir0), it is not the recommended optimization for high-throughput replication traffic. (Note: If your network cannot support 9000 end-to-end, 1500 must be used to prevent packet fragmentation, but 9000 remains the best-practice recommendation).
4200 (A): This is an arbitrary number and is not a standard network MTU size used in Pure Storage environments.
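A hedged example of applying the setting from the CLI; the interface name ct0.eth4 is hypothetical, and depending on the Purity version the subcommand may be purenetwork setattr or purenetwork eth setattr:

    # Set jumbo frames on a replication interface (repeat for the matching port on each controller)
    purenetwork setattr --mtu 9000 ct0.eth4
    # Verify the change
    purenetwork list ct0.eth4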
An application engineer reports seeing high latency in their application running in a VMware instance.
What is the best method to determine the source of the latency?
Options:
Analyze performance in the VM Topology in Pure1 for each component in the user's data path
Analyze performance charts in vSphere for CPU, Memory, Network, and Storage Path for the user's data path.
Analyze load metrics in Pure1 for each volume in the user's data path.
Answer: A
Explanation:
Within the Pure Storage ecosystem, the absolute best method to troubleshoot and pinpoint the exact source of VMware latency is to use VM Analytics (VM Topology) in Pure1.
VM Analytics is a feature built directly into Pure1 that maps the entire data path from the virtual machine all the way down to the physical FlashArray. It provides a visual topology map detailing the VM, Virtual Disk, ESXi Host, Datastore, and FlashArray Volume. By analyzing performance across this topology, an administrator can instantly identify exactly where the latency is being introduced. For example, you can clearly see if the latency spikes at the ESXi host layer (indicating compute contention) or the network layer, even if the FlashArray volume itself is reporting sub-millisecond latency at the storage level.
Here is why the other options are incorrect:
Analyze load metrics in Pure1 for each volume in the user's data path (C): Looking exclusively at volume-level metrics on the FlashArray will only tell you the latency from the array's perspective. If the latency is being caused by an overloaded ESXi host CPU or a saturated SAN fabric, the FlashArray metrics will look perfectly healthy, and you will fail to identify the source of the problem.
Analyze performance charts in vSphere for CPU, Memory, Network, and Storage Path for the user's data path (B): While vCenter performance charts are useful, they often lack deep storage-array-level context. Pure1's VM Topology is the "best" method because it correlates the vSphere stack data with the native FlashArray telemetry data in a single, unified view, making full-stack root cause analysis much faster.
A storage administrator is tasked with providing real-time data and alerts to the Network Operations Center (NOC) dashboard.
What source should the information come from to provide real-time data?
Options:
Pure Performance Monitoring
Pure1
FlashArray
Answer: C
Explanation:
To provide true real-time data and alerts directly to a Network Operations Center (NOC) dashboard, the information must be sourced directly from the FlashArray. The FlashArray's Purity operating environment natively supports real-time data streaming and alerting integrations via protocols like Syslog, SNMP traps, and the local REST API. Polling the array directly or configuring it to push alerts guarantees that the NOC receives instantaneous, up-to-the-second notifications regarding array health, hardware faults, and performance metrics.
Here is why the other options are incorrect:
Pure1 (B): While Pure1 is Pure Storage's powerful, cloud-based monitoring and predictive analytics platform, it relies on phone-home telemetry data. This telemetry is batched and transmitted from the array to the Pure1 cloud on a short polling interval (typically a few minutes). Because of this transmission and processing interval, Pure1 provides near-real-time (lagging by a few minutes) and historical data. It is excellent for global fleet management and predictive support, but not for instantaneous, zero-latency NOC alerting.
Pure Performance Monitoring (A): This is a distractor. There is no standalone product or specific protocol in the Pure Storage ecosystem officially named "Pure Performance Monitoring." Performance monitoring is simply a feature accessed via the FlashArray GUI/CLI or the Pure1 platform.
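As one hedged example of pushing real-time alerts straight from the array, an SNMP trap manager can be registered via the CLI; the manager name, host, and community string below are hypothetical, and exact flags should be checked against your Purity release:

    # Register the NOC's SNMP trap receiver on the FlashArray
    puresnmp create noc-traps --host 10.10.10.50 --community public
    # Confirm the manager is configured
    puresnmp list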
A storage administrator needs to determine what actions were taken on the array by the previous shift and is only able to access the FlashArray via CLI.
Which command provides that information?
Options:
pureaudit list --puremessage
pureaudit list
puremessage list
Answer: B
Explanation:
Understanding the Audit Log: In Purity, accountability and security are maintained through the Audit Log. This log captures every administrative action taken on the array, whether through the GUI, CLI, or REST API. It records who performed the action, what the action was (e.g., volume creation, host deletion), and when it occurred.
The CLI Command: The command pureaudit list is the specific CLI tool used to display these logs. By default, it lists events in chronological order, making it the perfect tool for an administrator to review "shift change" activities.
Command Options: pureaudit list can be filtered with flags like --user to see actions by a specific admin, or --start-time and --end-time to narrow down the "previous shift" window.
Why Option C is incorrect: puremessage (accessed via puremessage list) is used to view Alerts and Notifications generated by the system (e.g., a failed drive or a high-temperature warning). While it tells you what the array did, it does not track what users did.
Why Option A is incorrect: This is not a valid Purity command syntax. Purity does not use double-dashes to "pipe" or combine independent commands like pureaudit and puremessage in that manner.
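A short usage sketch (the --csv output flag is common to Purity list commands; the time-range filters described above can narrow the window further):

    # Show the most recent administrative actions (who, what, when)
    pureaudit list
    # Export the records in CSV for the security team
    pureaudit list --csv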
What is unified storage for Pure?
Options:
FlashArray runs both NFS and SMB protocols.
FlashArray runs both iSCSI and Fibre Channel (FC) protocols.
FlashArray runs both Block and File level protocols.
Answer: C
Explanation:
Defining Unified Storage: In the storage industry, "Unified Storage" refers to a single storage platform that can simultaneously serve data over both block-level and file-level protocols.
The Pure Storage Approach: Historically, FlashArray was a high-performance block-only array. However, with the introduction of FlashArray File Services, Pure transitioned to a unified architecture. This means the same hardware (FlashArray//X, //C, or //XL) and the same management interface (Purity) handle both types of workloads.
Protocol Support:
Block Protocols: Fibre Channel (FC), iSCSI, and NVMe-over-Fabrics (NVMe-oF).
File Protocols: NFS (Network File System) and SMB (Server Message Block).
Why this is " Unified " : * Shared Pool of Resources: Unlike older legacy systems that used " file gateways " or separate hardware heads for NAS, Pure’s unified storage shares a single global pool of flash memory and deduplication metadata.
Ease of Management: Administrators don ' t need to manage two different systems. You can create a Volume (Block) or a File System (File) from the same " Add " menu in the GUI.
Why Options A and B are incorrect: * Option A only describes the File side of the equation.
Option B only describes the Block side of the equation.
Only Option C accurately captures the combination of both paradigms, which is the definition of "Unified."
A storage administrator is troubleshooting multipathing issues.
What is the CLI command that allows the administrator to sample the I/O balance information at a consistent interval?
Options:
purehost monitor --balance --interval 15 --repeat 5
purehost monitor --balance --resample 5
purehost monitor --balance --interval 15
Answer: C
Explanation:
Command Purpose: The purehost monitor command is the primary tool in the Pure Storage CLI for observing real-time performance and connectivity health from the perspective of the hosts connected to the FlashArray.
The --balance Flag: When the --balance flag is added, the output shifts from general performance (IOPS, bandwidth, latency) to showing how I/O is distributed across the available paths (controllers and ports). This is critical for identifying "unbalanced" loads, which usually point to misconfigured MPIO (Multi-Path I/O) on the host side (e.g., a host only using one controller's ports).
Interval vs. Repeat:
The --interval flag specifies the time in seconds between each sample. In option C, --interval 15 tells the array to refresh the data every 15 seconds.
The --repeat flag (seen in option A) is used to limit the total number of samples taken before the command exits. However, in standard troubleshooting, the administrator typically wants a consistent stream of data until manually stopped (Ctrl+C).
--resample (seen in option B) is not a valid flag for the purehost monitor command in Purity.
Best Practice: When troubleshooting multipathing, Pure Storage recommends monitoring the balance to ensure that the "Relative I/O" percentage is roughly equal across all active paths. Large discrepancies often indicate that the host's MPIO policy is set to "Failover Only" instead of the recommended "Round Robin" or "Least Queue Depth."
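For reference, typical troubleshooting invocations built from the flags discussed above:

    # Sample I/O balance across all paths every 15 seconds until stopped with Ctrl+C
    purehost monitor --balance --interval 15
    # Or take exactly 5 samples and exit
    purehost monitor --balance --interval 15 --repeat 5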
A FlashArray administrator is configuring new hosts. There is an option in the personality settings for the target OS.
When is the best time to configure the personality for a host in Purity?
Options:
When a host is initially created and before volumes are connected to the host.
Host personalities can be configured at any time except for the ESXi operating system.
After the host has been created and volumes are connected to the host.
Answer: A
Explanation:
Definition of Host Personality: In Purity//FA, a Host Personality is a setting applied to a host object that modifies how the FlashArray communicates with that specific initiator. It ensures the array sends the correct SCSI responses that the target Operating System (OS) expects. Common personalities include ESXi, AIX, HP-UX, and Hitachi-VSP.
The Importance of Timing: The best practice is to set the personality during the host creation phase, before any volumes are attached or I/O has commenced. This ensures that from the very first "Inquiry" command sent by the host, the FlashArray responds with the appropriate settings (such as specific VAAI primitives for ESXi or specific ALUA behaviors for other Unix variants).
Risks of Changing Later: While Purity allows you to change a host personality later, doing so while volumes are connected and I/O is active can be disruptive. For many operating systems, a change in personality requires the host to be rebooted or the storage paths to be "rescanned" to recognize the change in device capabilities.
Default Behavior: If no personality is selected, the FlashArray uses a "Generic" personality suitable for standard Windows and Linux distributions. However, for specialized hypervisors like ESXi, failing to set the personality correctly from the start can lead to performance issues or lack of support for hardware acceleration features.
Why Option C is incorrect: Changing the personality after volumes are connected is reactive rather than proactive. It increases the risk of the host misinterpreting the storage device's capabilities, potentially leading to mount failures or path instability.
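A hedged sketch of the recommended order of operations; the host name, WWN, and volume name are hypothetical, and the exact connect syntax (purehost connect --vol vs. purevol connect --host) varies by Purity release:

    # Create the host with its personality set up front, before any volume is connected
    purehost create --personality esxi --wwnlist 21:00:00:24:FF:41:19:A2 esx-host-01
    # Only after creation, connect volumes to the host
    purehost connect --vol datastore-vol-01 esx-host-01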
What is the purpose of a Protocol Endpoint volume?
Options:
It allows for volumes of the same name within host groups.
It serves as a mount point for vVols.
It is required to set Host Protocol.
Answer: B
Explanation:
In a VMware vSphere environment utilizing Virtual Volumes (vVols), a Protocol Endpoint (PE) acts as a crucial logical proxy or I/O access point between the ESXi hosts and the storage array.
Unlike traditional VMFS datastores where the host mounts a massive LUN and places all VM files inside it, vVols map individual virtual machine disks directly to native volumes on the FlashArray. Because a single ESXi host could potentially need to communicate with thousands of individual vVol volumes, it would be extremely inefficient to map every single one directly to the host. Instead, the ESXi host mounts the Protocol Endpoint, and the storage array uses this PE to dynamically route the I/O to the correct underlying vVol. On a Pure Storage FlashArray, creating and connecting a PE volume to your ESXi host groups is a mandatory prerequisite for setting up a vVol datastore.
Here is why the other options are incorrect:
It allows for volumes of the same name within host groups (A): Purity OS requires all volume names across the entire FlashArray to be completely unique, regardless of which host group they are connected to or whether a Protocol Endpoint is in use.
It is required to set Host Protocol (C): The host communication protocol (such as iSCSI, Fibre Channel, or NVMe-oF) is determined by the physical host bus adapters (HBAs), network interface cards (NICs), and the configuration of the Host object in Purity, not by the creation of a volume type like a PE.
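A hedged sketch of provisioning a PE; the volume and host group names are hypothetical, and syntax should be confirmed for your Purity release:

    # Create a protocol endpoint volume
    purevol create --protocol-endpoint pure-pe-01
    # Connect the PE to the ESXi host group so hosts can bind vVols through it
    purehgroup connect --vol pure-pe-01 esxi-cluster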
What happens when you demote the original source pod?
Options:
It saves a temporary copy of the source pod content in the eradication bin.
Replication is reversed.
Replication is paused.
Answer: B
Explanation:
ActiveCluster and Pod Roles: In a Pure Storage ActiveCluster or ActiveDR environment, a Pod is a management container for volumes. To move workloads or perform a planned failover between two arrays, you use the Promote and Demote commands.
The Reversal Process: When you have two pods in a replication relationship (Source and Target), data flows from the Promoted (Active/Source) pod to the Demoted (Passive/Target) pod.
When you Demote the current source, it transitions from a "read-write" state to a "read-only" (passive) state.
If the other pod in the pair is then Promoted, Purity automatically reverses the direction of replication. The array that was previously receiving data now begins sending incremental updates back to the original source.
Continuous Protection: This design ensures that you don't have to manually tear down and recreate replication schedules every time you switch production sites. The system tracks the metadata changes and ensures that only the delta (changed blocks) is sent in the new direction.
Why Option C is incorrect: If replication were simply paused, the two sites would quickly drift out of sync, making it impossible to fail back without a full baseline resync.
Why Option A is incorrect: Demoting a pod does not delete any data; it simply changes the access characteristics and replication role. The data remains fully intact on the storage media.
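A minimal sketch of a planned failover using the promote/demote commands described above; prod-pod is a hypothetical pod name:

    # On the original source array: replicate final changes, then demote to read-only
    purepod demote --quiesce prod-pod
    # On the peer array: promote the pod; the replication direction reverses automatically
    purepod promote prod-pod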
An On-Premises ActiveCluster (AC) Mediator is installed on an ESXi server. The mediator was previously online but when the administrator checked the status of the ActiveCluster (AC) pods the mediator status was listed as "unreachable" for both FlashArrays in the ActiveCluster (AC) pair.
What is a possible cause of the mediator being unreachable from both FlashArrays?
Options:
Fibre Channel (FC) zoning or network access has not been created properly for the host.
The mediator does not reside within a Pure datastore.
Outbound TCP port 80 is not allowed from the FlashArrays.
Answer: C
Explanation:
The ActiveCluster Mediator (whether it is the Pure1 Cloud Mediator or the On-Premises VM) is a lightweight tie-breaker that communicates continuously with the management interfaces of both FlashArrays. If it was previously online and suddenly reports as "unreachable" from both arrays simultaneously, the issue is almost always caused by a network interruption or firewall rule change blocking the required communication ports between the arrays' management IP addresses and the Mediator VM.
If a network firewall is suddenly configured to drop or deny outbound TCP traffic (such as port 80/443, depending on the specific HTTP/HTTPS discovery and heartbeat configuration) from the FlashArrays to the ESXi-hosted Mediator, the arrays will fail to send their heartbeats, causing the mediator status to drop to "unreachable."
Here is why the other options are incorrect:
Fibre Channel (FC) zoning or network access has not been created properly for the host (A): The Mediator is completely independent of the front-end host storage fabric (Fibre Channel or iSCSI). Host zoning issues would prevent the ESXi server from seeing its volumes, but it would not cause the FlashArrays to lose management network connectivity to the Mediator.
The mediator does not reside within a Pure datastore (B): This is actually a strict best practice and requirement. Pure Storage explicitly states that the On-Premises Mediator VM must be deployed in a separate (third) failure domain. It should not reside on the ActiveCluster mirrored datastore, because a site-wide SAN failure would take the mediator offline exactly when it is needed most. Therefore, not residing on a Pure datastore is the correct setup, not a cause for an outage.
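A hedged way to confirm the symptom from each array's CLI (recent Purity releases report a Mediator Status column in this view; verify the flag on your version):

    # Show pods with per-array status, including mediator reachability
    purepod list --array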
How are in-progress asynchronous snapshot transfers monitored from the UI?
Options:
From the replication target
From either the replication source or target
From the replication source
Answer: A
Explanation:
According to official Pure Storage documentation regarding Asynchronous Replication management, while replication throughput (bandwidth) can be viewed globally on the Analysis tab, the actual replication status for in-progress snapshot transfers is tracked and monitored on the replication target.
To monitor an in-progress asynchronous transfer from the GUI, a storage administrator must log into the target FlashArray, navigate to Storage > Protection Groups, and look at the Transfers section within the Protection Group Snapshots panel. This view explicitly details the time the replicated snapshot was created on the source, the time the transfer started, and the current progress of the snapshot being received. If a transfer is currently in progress, the "Completed" column remains blank until the snapshot is fully written to the target array.
Here is why the other options are incorrect:
From the replication source (C): While the source orchestrates the creation of the snapshot and initiates the data push, the granular transfer completion status and historical transfer logs of the incoming snapshots are tracked on the target's Protection Group interface.
From either the replication source or target (B): Because the specific "Transfers" tracking panel for asynchronous protection group snapshots is located on the receiving end (target), the granular completion status cannot be monitored symmetrically from either side in the UI.
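For completeness, the same in-progress information is exposed in the CLI on the target; a hedged example:

    # On the TARGET array: show protection group snapshot transfers and their progress
    purepgroup list --snap --transfer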
A storage administrator is configuring a new volume and wants to provision 500GB. If the administrator accidentally selects PB, what will happen?
Options:
The volume will be created and space will immediately be used.
The volume will be created but a warning will be displayed.
The volume will not be created and a warning will be displayed.
Answer: B
Explanation:
Pure Storage FlashArrays utilize Thin Provisioning as a core, always-on architectural principle. When a volume is created, the "size" assigned to it is merely a logical limit (a quota) presented to the host; no physical back-end flash capacity is allocated or "pinned" at the time of creation.
Because of this architecture, Purity allows administrators to create volumes that are significantly larger than the actual physical capacity of the array (this is known as over-provisioning). If an administrator accidentally selects PB (Petabytes) instead of GB, the Purity GUI will allow the volume to be created because it is a logical operation that doesn't immediately consume 1PB of physical flash. However, Purity includes a built-in safety check: if the requested logical size is exceptionally large or exceeds the current physical capacity of the array, the GUI will present a warning or confirmation prompt to ensure the administrator is aware of the massive logical size being provisioned before finalizing the change.
Here is why the other options are incorrect:
The volume will be created and space will immediately be used (A): This describes "Thick Provisioning," which Pure Storage does not use. Space is only consumed on a FlashArray when unique data is actually written by the host and processed by the deduplication and compression engines.
The volume will not be created and a warning will be displayed (C): Purity does not strictly forbid over-provisioning. While it warns the user to prevent human error, it does not block the creation of the volume, as over-provisioning is a standard practice in thin-provisioned environments.
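A short illustration of the scenario from the CLI; volume names are hypothetical, and resizing down uses purevol truncate in recent Purity releases:

    # Intended: a 500 GB thin-provisioned volume
    purevol create --size 500G app-vol-01
    # Accidental: 500 PB is still accepted (after a warning) because no physical space is pinned
    purevol create --size 500P app-vol-02
    # Correct the mistake by shrinking the logical size
    purevol truncate --size 500G app-vol-02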
An engineer is tasked by the IT security team to pull audit trail logs from the last month. The engineer navigates to the audit trail section of the FlashArray GUI, but sees the audit trail only contains a maximum of 1000 records.
What step should the engineer take?
Options:
Use the CLI, as it has the ability to specify a date range for logs.
Log in to Pure1 to access historical audit trail items.
Update the Purity tunable on the array to increase the audit trail data.
Answer: B
Explanation:
Local Array Limitations: The FlashArray GUI and CLI maintain a local buffer for audit logs (which track commands, logins, and configuration changes). However, this local storage is limited in size and record count (typically around 1000 records or a short timeframe) to ensure that logging does not consume excessive system resources on the controllers. Once the limit is reached, older records are overwritten (FIFO - First In, First Out).
Pure1 as the Historical Repository: Pure1 is Pure Storage's cloud-based management and monitoring platform. One of its primary functions is to act as a long-term repository for array data. FlashArrays "phone home" their audit logs to Pure1, where they are indexed and stored for much longer periods (typically up to one year or more, depending on the subscription level).
Auditing in Pure1: By logging into the Pure1 portal, an administrator can navigate to the Audits section. Unlike the local GUI, Pure1 allows users to filter by specific date ranges, specific arrays, and specific users across the entire fleet. This makes it the standard tool for security audits and compliance reporting.
Why Options A and C are incorrect: Option A: While the CLI is powerful, it still pulls from the same limited local buffer as the GUI. If the record has been overwritten locally, the CLI cannot retrieve it.
Option C: Purity does not typically allow customers to modify "tunables" to increase log storage, as this could impact the stability or performance of the Purity Operating Environment.
What is the Pure Storage recommended Maximum Transmission Unit (MTU) size for the replication ports on a FlashArray?
Options:
9216
1500
9000
Answer: C
Explanation:
Understanding MTU: The Maximum Transmission Unit (MTU) defines the largest size of a packet or frame that can be sent in a single network transaction. The standard Ethernet MTU is 1500 bytes. Anything larger than 1500 bytes is referred to as a Jumbo Frame.
Replication Efficiency: Replication involves moving large amounts of data between arrays. Using standard 1500-byte frames results in higher overhead because the CPU must process a larger number of headers for the same amount of data. By increasing the MTU, the FlashArray can pack more data into each frame, reducing CPU interrupts and improving overall throughput.
The Pure Recommendation: Pure Storage specifically recommends an MTU of 9000 for both iSCSI and Replication traffic. This is the industry standard for Jumbo Frames that balances efficiency with compatibility across most enterprise-grade switches.
Configuration Requirements: It is critical to remember that MTU must be configured end-to-end. For an MTU of 9000 to work on the replication ports:
The FlashArray replication ports must be set to 9000.
The network switches along the path (and any routers/ISLs) must support and be configured for at least 9000.
The target array's replication ports must also be set to 9000.
Why 9216 (Option A) is incorrect: While many switches support a slightly larger maximum MTU such as 9216 to account for VLAN tagging and encapsulation overhead, Pure's internal and best practice documentation specifically points to 9000 as the standard setting for the array's interface.
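A common way to verify 9000 end-to-end is a do-not-fragment ping sized for an 8972-byte payload (9000 minus 20 bytes of IP header and 8 bytes of ICMP header), run from a Linux host on the replication network:

    # Succeeds only if every hop on the path carries 9000-byte frames
    ping -M do -s 8972 <replication-target-ip>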
In Pure Protect //DRaaS, the administrator modified the business policy used for backups, reducing the "DR Retention" from 7 days to 3 days. The DR target environment currently has 7 days of backups.
What will occur?
Options:
The change will error out, requiring manual expiration of backups older than 3 days.
Earlier backups will be erased to match the modified policy.
Earlier backups will be retained until they expire according to the pre-modification policy, with new backups following the updated policy.
Answer: B
Explanation:
Policy-Driven Automation: Pure Protect //DRaaS (Disaster Recovery as a Service) is built on a declarative policy engine. When you define a business policy (Protection Group or similar policy-based management), the system's primary goal is to bring the environment into compliance with the "Desired State" defined by that policy.
Retention Enforcement: When the retention period is reduced (e.g., from 7 days down to 3 days), the Purity/Pure Protect engine identifies that any existing snapshots or backups older than the new 3-day threshold are now "out of policy."
Immediate Reclamation: Unlike some legacy backup systems that only apply new retention settings to future backups, Pure Storage's policy-driven architecture typically triggers an immediate cleanup of the now-obsolete data to reclaim space on the target. This ensures the environment matches the modified policy requirements immediately upon the policy update.
SafeMode Considerations: If SafeMode is enabled on the target, these "erased" backups will actually move into the "Destroyed" (but not yet eradicated) bucket for the duration of the SafeMode timer, providing a safety net against accidental policy changes or malicious deletions. However, from the perspective of the active DR policy, they are removed.
What should an administrator configure when setting up device-level access control in an NVMe/TCP network?
Options:
VLANs
NQN
LACP
Answer: B
Explanation:
In any NVMe-based storage fabric (including NVMe/TCP, NVMe/FC, and NVMe/RoCE), the standard method for identifying endpoints and enforcing device-level access control is the NQN (NVMe Qualified Name).
The NQN serves the exact same purpose in the NVMe protocol as an IQN (iSCSI Qualified Name) does in an iSCSI environment, or a WWPN (World Wide Port Name) does in a Fibre Channel environment. It is a unique identifier assigned to both the host (initiator) and the storage array (target subsystem). When setting up access control on a Pure Storage FlashArray, the storage administrator must capture the Host NQN from the operating system and configure a Host object on the array with that specific NQN. This ensures that only the authorized host can discover, connect to, and access its provisioned NVMe namespaces (volumes).
Here is why the other options are incorrect:
VLANs (A): Virtual LANs are used for network-level isolation and segmentation at Layer 2 of the OSI model. While you might use a VLAN to separate your storage traffic from your management traffic, it is a network security measure, not a device-level access control mechanism for the storage protocol itself.
LACP (C): Link Aggregation Control Protocol (LACP) is a network protocol used to bundle multiple physical network links into a single logical link for redundancy and increased bandwidth. It has nothing to do with storage access control or mapping volumes to hosts.
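A hedged sketch of the workflow: read the host NQN from a Linux initiator, then register it on the array; the host name is hypothetical and the exact flag (--nqnlist) should be verified for your Purity release:

    # On the Linux host: display the initiator's NQN
    cat /etc/nvme/hostnqn
    # On the FlashArray: create the host object keyed to that NQN
    purehost create --nqnlist nqn.2014-08.org.nvmexpress:uuid:<host-uuid> db-host-01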
An administrator wants to upgrade an Edge Services agent and sees the Gateway Update Status in the GUI showing "Eligible (updates disallowed)".
What should the administrator do?
Options:
Enable agent updates via CLI with the command "puresupport enable edge-agent-update".
Remove and re-install the edge agent you want to update.
Log in to the GUI as an array admin and allow Edge Agent updates.
Answer: C
Explanation:
Edge Services and Gateways: Pure Storage FlashArray uses Edge Services (often associated with FA File or cloud integrations) to manage communication between the array and external services. The Gateway is the component that facilitates this secure connection.
Update Policy Control: To prevent unplanned outages or changes to the environment, Purity includes a safety toggle for Gateway updates. When the status shows "Eligible (updates disallowed)", it means a newer version of the agent is available on the Pure Storage back-end, but the array's local policy is currently set to prevent automatic or manual "one-click" updates.
GUI Authorization: This is a security and administrative control. An administrator with Array Admin privileges must navigate to the Edge Services/Gateway configuration section in the Purity GUI and explicitly change the setting to "Allow Updates". Once this toggle is enabled, the status will change to "Eligible," and the update can be initiated.
Why Option A is incorrect: While the CLI is used for many advanced support functions, the puresupport namespace is generally reserved for Pure Storage Support technicians and requires a challenge-response session key. Standard agent updates are handled via the administrative GUI.
Why Option B is incorrect: Removing and re-installing the agent is an unnecessary and disruptive process. The "disallowed" status is simply a policy setting, not a corruption of the agent itself.
The Load Meter in the Pure1 GUI shows a consistently high workload, averaging a 90% load over the past hour. The array also has high space usage of 85%.
What is the expected result?
Options:
The FlashArray will use QoS to limit impact of incoming IO to verify system processes are functioning for health of array.
The FlashArray will limit Space Reclamation and Space Reporting on array.
The FlashArray will prioritize Space Reclamation so array does not exceed 90% full.
Answer: A
Explanation:
Understanding the Load Meter: The Load Meter in Pure1 and Purity represents the percentage of the array's performance capacity currently being utilized. It takes into account CPU cycles, back-end metadata processing, and front-end I/O. A 90% load means the controllers are nearly saturated.
The Impact of Capacity on Load: As a FlashArray fills up (specifically beyond 80%), the Purity Operating Environment must work harder to find and organize free space. This "Garbage Collection" (GC) process becomes more intensive, which consumes more controller resources and contributes to a higher Load Meter reading.
Internal System QoS: To ensure the stability and integrity of the storage, Pure Storage uses Internal Quality of Service (QoS). This is an "always-on" feature that prioritizes critical system processes (like metadata updates, internal health checks, and data protection) over incoming host I/O during periods of extreme resource contention.
Graceful Performance Pacing: When the load is consistently high (like the 90% described), Purity may introduce small amounts of latency to the host I/O (often seen as "Wait" or "Queue" time) to "pace" the workload. This prevents the controllers from reaching a 100% "locked" state, ensuring the array remains responsive and healthy even under heavy pressure.
Why Option C is incorrect: While the array needs to reclaim space, prioritizing Space Reclamation (a background task) during a 90% performance load would likely push the controllers to 100% load, causing significant latency spikes or instability for the host. The system must balance reclamation with active production I/O.
Which of the following statements regarding REST APIv1 and REST APIv2 is true?
Options:
REST API 1.x will no longer continue to receive feature enhancements for new Purity features.
REST API 2.x has not yet reached feature parity with REST API 1.x.
REST API 1.x and REST API 2.x operations are not supported side by side, support must be contacted to upgrade to REST API 2.x.
Answer: A
Explanation:
API Evolution: Pure Storage introduced REST API 2.x to provide a more scalable, standardized, and performant way to automate FlashArray management. It uses a different authentication method (OAuth2 with API Clients) compared to the API Token-based method in 1.x.
Feature Freeze on 1.x: As of Purity 6.x and beyond, Pure Storage has designated REST API 1.x as "Legacy." While 1.x is still supported for backward compatibility to ensure older scripts don't break, all new Purity features (such as specialized ActiveDR commands, advanced File Services, or new hardware capabilities) are only developed and exposed via REST API 2.x.
Side-by-Side Support: Contrary to option C, both versions are supported side-by-side on the same array. An administrator can run a script using 1.x for volume creation and another script using 2.x for performance monitoring simultaneously without contacting support.
Feature Parity: REST API 2.x has long since reached and exceeded the capabilities of 1.x. It offers improved filtering, pagination, and a more consistent object model (e.g., /volumes instead of multiple nested endpoints).
Best Practice: Pure Storage strongly recommends that all new automation projects use REST API 2.x to ensure access to the full suite of Purity features and to future-proof infrastructure-as-code (IaC) workflows.
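A hedged sketch of the 2.x authentication flow using curl; the array hostname, API version path, and token values are illustrative:

    # Exchange an API token for a short-lived session token (returned in the x-auth-token header)
    curl -k -i -X POST "https://array.example.com/api/2.8/login" -H "api-token: <your-api-token>"
    # Use the session token on subsequent 2.x calls
    curl -k "https://array.example.com/api/2.8/volumes" -H "x-auth-token: <token-from-login>"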