Nutanix Certified Professional - Unified Storage (NCP-US) v6.10 Questions and Answers
Which component is a prerequisite for deploying Nutanix Files?
Options:
A minimum of two vCPUs
Prism Central
Storage Container
iSCSI Data Services IP
Answer:
C
Explanation:
Storage Containers are mandatory for Nutanix Files because:
They provide the underlying storage pool for file shares.
They define replication factor, compression, and encryption settings.
Other options are incorrect:
A: vCPUs are per-File-Server-VM specs (≥4 vCPUs required).
B: Prism Central is needed for management but not core deployment.
D: iSCSI IPs relate to Nutanix Volumes, not Files.
Question:
An administrator is deploying File Analytics. The following subnets are available:
CVM subnet: 10.1.1.0/24
AHV subnet: 10.1.2.0/24
Nutanix Files client network: 10.1.3.0/24
Nutanix Files storage network: 10.1.4.0/24
The administrator has reserved 10.1.4.100 as the File Analytics IP. However, the deployment fails with the error shown:
“Error creating volume group, please check logs for more details.”
What action must the administrator take to successfully deploy File Analytics?
Options:
Allow port 445 in the firewall.
Re-deploy File Analytics on the Files storage network.
Re-deploy File Analytics on the Files client network.
Allow port 139 in the firewall.
Answer:
C
Explanation:
According to the NUSA course materials, File Analytics is designed to be deployed on the same network as the Nutanix Files client network because:
File Analytics accesses file share metadata and analytics data through the same SMB/NFS protocols used by clients accessing the shares.
Using the client network ensures that File Analytics can connect to the SMB/NFS endpoints, collect activity logs, and provide visibility without traversing storage-only traffic.
Using the storage network (as was done with IP 10.1.4.100 in this case) leads to deployment errors because:
“The storage network in Nutanix Files is used exclusively for data replication and cluster-level operations—not for client or analytics traffic. Using this network for File Analytics deployment causes communication failures.”
Thus, the administrator must redeploy File Analytics on the Files client network (10.1.3.0/24), ensuring proper access and connectivity.
The firewall port configuration (ports 445/139) is relevant for SMB traffic but not the root cause of the deployment error in this case.
An administrator needs to create a volume group (VG) that will host highly sensitive data. These two requirements must be met:
• The VG must be accessible only by the OS where the data is going to be used by the application
• The access needs to be secured with an additional security login
Which two features or settings will help the administrator meet those requirements? (Choose two.)
Options:
CHAP authentication needs to be set up for that Volume Group.
On-the-wire encryption must be enabled for all iSCSI traffic.
All CVMs must have RDMA-capable NICs to facilitate direct peer-to-peer communication.
The VG configuration must contain only the IQN of the client OS where the application runs.
Answer:
A, D
Explanation:
The Nutanix Unified Storage Administration (NUSA) course module “Configuring and Securing Volume Groups” specifies that to secure access to volume groups (VGs) containing sensitive data:
CHAP Authentication (Challenge-Handshake Authentication Protocol) provides an additional layer of security by requiring authentication before iSCSI connections are established. This satisfies the second requirement: “The access needs to be secured with an additional security login.”
IQN-based Access Control ensures that only the intended initiator (the client OS) can access the VG by explicitly specifying the IQN of the client in the VG configuration. This meets the first requirement: “The VG must be accessible only by the OS where the data is going to be used by the application.”
While on-the-wire encryption is beneficial for data confidentiality, the course emphasizes that CHAP and IQN-based controls are the specific mechanisms for access security. RDMA-capable NICs are not relevant to restricting access or security in this context.
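As an illustrative, non-authoritative sketch of the cluster-side half of this configuration (the VG name, container, disk size, and initiator IQN below are hypothetical, and the exact acli syntax should be verified for the AOS version in use):

```
# Run from a CVM. All names, sizes, and IQNs are hypothetical placeholders.
# Create the volume group and back it with a vDisk from an existing storage container.
acli vg.create secure-vg
acli vg.disk_create secure-vg container=default-container create_size=500G

# Whitelist only the application host's initiator IQN so no other OS can attach the VG.
acli vg.attach_external secure-vg iqn.1991-05.com.microsoft:app-host01

# CHAP credentials are then configured on the VG (for example, through the Prism UI)
# and mirrored on the client's iSCSI initiator to provide the additional security login.
```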
An administrator has a Nutanix Files deployment hosted on an AHV-based Nutanix cluster, scaled out to four FSVMs hosting several department shares. In the event of a ransomware attack, files need to be quickly recovered from a self-hosted snapshot.
How can this be accomplished?
Options:
Configure an Async DR Protection Domain.
Install NGT and enable self-service restore.
Configure a DR Availability Zone.
Use File Analytics to enable self-service restore.
Answer:
B
Explanation:
Self-Service Restore (SSR) requires Nutanix Guest Tools (NGT) installed on client VMs. SSR allows end users to directly restore files/folders from snapshots via Windows Previous Versions or macOS Time Machine, enabling rapid ransomware recovery without IT intervention.
Option A/C: Async DR and Availability Zones are for disaster recovery (site-level), not granular file recovery.
Option D: File Analytics provides insights but cannot enable restores.
An administrator wants to use Smart DR to ensure that in the event of an unplanned loss of service, users are redirected automatically to the recovery site. What can satisfy this requirement?
Options:
Configure Protection Policy replication schedule.
Configure AD and DNS access for seamless client failover.
Register PE clusters to PC before enabling the Files Manager.
Register Nutanix Files with the same PC.
Answer:
B
Explanation:
To ensure that users are automatically redirected to the recovery site during an unplanned loss of service when using Smart DR for Nutanix Files, the administrator must configure Active Directory (AD) and DNS access for seamless client failover. Smart DR enables disaster recovery by replicating file shares between primary and recovery sites, and automatic client redirection requires proper configuration of AD and DNS to update client access to the recovery site’s file server.
The Nutanix Unified Storage Administration (NUSA) course states, “For Smart DR to support seamless failover in Nutanix Files, AD and DNS must be configured to redirect clients to the recovery site’s file server VIP automatically during a failover event.” This involves ensuring that the file server’s DNS name resolves to the recovery site’s VIP and that AD authentication is available at the recovery site to maintain user access to file shares.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide elaborates that “Smart DR failover requires AD and DNS integration to update the file server’s DNS records to point to the recovery site’s VIP, ensuring clients are redirected without manual intervention.” This configuration allows clients to continue accessing file shares using the same DNS name, with the underlying IP address switching to the recovery site’s VIP during failover.
The other options are incorrect or insufficient:
Configure Protection Policy replication schedule: While configuring a replication schedule is necessary for Smart DR to replicate data, it does not address the requirement for automatic client redirection, which depends on AD and DNS.
Register PE clusters to PC before enabling the Files Manager: Registering Prism Element (PE) clusters to Prism Central (PC) is a prerequisite for managing Nutanix Files, but it does not directly enable automatic client redirection for Smart DR.
Register Nutanix Files with the same PC: While Nutanix Files instances may be managed by the same Prism Central, this does not ensure automatic client redirection, which requires AD and DNS configuration.
The NUSA course documentation highlights that “Smart DR leverages AD and DNS to provide seamless failover, ensuring clients are automatically redirected to the recovery site’s file server without service interruption.”
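To illustrate the DNS piece only, here is a hedged sketch using nsupdate with GSS-TSIG against a hypothetical AD-integrated zone; the server, record name, and recovery-site VIP are invented, and in practice Smart DR drives this update once AD and DNS access are configured:

```
# Hypothetical DNS server, file server record, and recovery-site VIP.
# Requires a valid Kerberos ticket for secure dynamic updates against AD DNS.
nsupdate -g <<'EOF'
server dc01.corp.example.com
update delete files01.corp.example.com A
update add files01.corp.example.com 300 A 10.20.30.40
send
EOF
```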
An administrator has configured a corporate antivirus solution to place virus-infected files into quarantine where clients cannot read or write the files.
Which actions in addition to Rescan and Unquarantine can the administrator perform on the quarantined files?
Options:
Alert
Report
Reset
Delete
Answer:
D
Explanation:
For quarantined files in Nutanix Files (via antivirus integration), administrators can:
Rescan: Re-check the file for malware.
Unquarantine: Restore the file if falsely flagged.
Delete: Permanently remove infected files to prevent risks.
Options A/B/C are invalid:
Alert (A): Not a file action; part of notification settings.
Report (B): Generates summaries but doesn’t act on files.
Reset (C): No such quarantine function.
An administrator is setting up a Windows client to access a Volume Group (VG) served by a Nutanix cluster.
Which configuration items should the administrator take from the cluster? (Choose two.)
Options:
The cluster's data services IP (DSIP)
The cluster's fully qualified domain name (FQDN)
The IPs of all cluster CVMs
The VG name
Answer:
A, D
Explanation:
When setting up a Windows client to access a Volume Group (VG) via iSCSI, the administrator must configure the client’s iSCSI initiator to connect to the correct target.
1. Data Services IP (DSIP):
The DSIP is used by external clients (like Windows servers) to connect to Nutanix services, including iSCSI for Volume Groups. It’s a highly available IP that floats across the cluster CVMs.
2. Volume Group Name (VG Name):
This is the target name that the Windows iSCSI initiator will log on to. It’s needed to identify which Volume Group to access.
The cluster’s FQDN or all CVM IPs aren’t used for direct iSCSI target connections. The DSIP ensures proper load balancing and failover for the connection, while the VG name is essential to identify the specific storage being requested.
An administrator notices that a database VM is experiencing poor disk performance. Which storage technology should the administrator consider using?
Options:
Volume Groups
Nutanix Files NFS export
Nutanix Objects
Nutanix Files SMB share
Answer:
A
Explanation:
For a database VM experiencing poor disk performance, the administrator should consider using **Volume Groups** (Nutanix Volumes). Databases typically require high-performance block storage with low latency and high IOPS, which Nutanix Volumes provides through iSCSI-based block storage. Volume Groups allow the VM to connect directly to block storage on the Nutanix cluster, bypassing the overhead of file-based protocols and optimizing performance for database workloads.
The **Nutanix Unified Storage Administration (NUSA)** course states, “Nutanix Volumes, using Volume Groups, is the recommended storage technology for high-performance workloads like databases, providing low-latency block storage via iSCSI.” Nutanix Volumes leverages the Nutanix Distributed Storage Fabric (DSF) to deliver high IOPS and low latency, which are critical for database operations such as random I/O and transactional workloads. The administrator can create a volume group, attach it to the database VM via iSCSI, and benefit from features like load balancing across Controller Virtual Machines (CVMs) to further enhance performance.
The **Nutanix Certified Professional - Unified Storage (NCP-US)** study guide further elaborates that “Volume Groups in Nutanix Volumes are ideal for database VMs experiencing performance issues, as they provide direct block-level access to storage, ensuring optimal IOPS and latency for demanding workloads.” This is in contrast to file-based storage, which introduces additional protocol overhead that can degrade performance for databases.
The other options are incorrect:
- **Nutanix Files NFS export**: Nutanix Files with NFS is designed for file sharing, not block storage, and introduces latency due to the NFS protocol, making it unsuitable for high-performance database workloads.
- **Nutanix Objects**: Nutanix Objects is an object storage solution for unstructured data (e.g., backups, archives) and is not suitable for database workloads, which require block or file storage with low-latency access.
- **Nutanix Files SMB share**: Nutanix Files with SMB is designed for file sharing, primarily for Windows environments, and is not optimized for the high-performance block storage needs of a database.
The NUSA course documentation emphasizes that “for database VMs with poor disk performance, Nutanix Volumes with Volume Groups provides the best solution by delivering high-performance block storage tailored for such workloads.”
What prerequisite must be met before a Nutanix Files SMB share can be used?
Options:
Configure directory services.
Register the cluster with Prism Central.
Run afs infra.start
Enable a strong password policy.
Answer:
A
Explanation:
Directory services integration (e.g., Active Directory) is mandatory for SMB shares to:
Authenticate users.
Apply access controls (ACLs).
Enable Kerberos-based security. Without this, SMB shares cannot be accessed by domain-joined clients.
Option B: Prism Central registration enables central management but isn't a share prerequisite.
Option C: afs infra.start is an invalid command.
Option D: Password policies are enforced via directory services but not a standalone prerequisite.
An administrator is trying to configure Mutual CHAP on a Linux guest. During configuration, the administrator keeps getting an Authentication Failure error.
What should the administrator do to resolve the issue?
Options:
Configure the password on the target, leave the client password blank.
Configure the client and target with different passwords.
Configure the client and target with the same password.
Configure the password on the client, leave the target password blank.
Answer:
C
Explanation:
Mutual CHAP (Challenge-Handshake Authentication Protocol) is used in Nutanix Unified Storage for secure two-way authentication between an iSCSI initiator (client) and the target (VG in Nutanix).
For successful mutual authentication, both the client and the target must use the same CHAP secret:
The initiator uses this secret to authenticate the target.
The target uses the same secret to authenticate the initiator.
The NCP-US and NUSA course materials clearly state:
“Mutual CHAP requires the same CHAP secret to be configured on both the iSCSI initiator (client) and target. Mismatched secrets will result in authentication failures.”
In this scenario, the error is because the secrets do not match. Setting the same password on both resolves the issue.
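A minimal open-iscsi sketch of the client-side settings on the Linux guest, assuming placeholder values for the Data Services IP, target IQN, CHAP usernames, and the shared secret (which must match what is configured on the Nutanix target):

```
# Placeholders throughout; the secret must match the one set on the Nutanix Volume Group.
iscsiadm -m discovery -t sendtargets -p 10.1.5.50:3260

TARGET=iqn.2010-06.com.nutanix:secure-vg
# One-way CHAP: the target authenticates this initiator.
iscsiadm -m node -T "$TARGET" -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$TARGET" -o update -n node.session.auth.username -v iqn.1994-05.com.redhat:app-host01
iscsiadm -m node -T "$TARGET" -o update -n node.session.auth.password -v 'SharedSecret123'
# Mutual CHAP: the initiator also authenticates the target, using the matching secret.
iscsiadm -m node -T "$TARGET" -o update -n node.session.auth.username_in -v nutanix-target
iscsiadm -m node -T "$TARGET" -o update -n node.session.auth.password_in -v 'SharedSecret123'

iscsiadm -m node -T "$TARGET" --login
```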
Question:
An administrator needs to replicate user data across file servers in different locations.
Which Nutanix Files feature should the administrator utilize?
Options:
Data Protection
Smart Sync
Data Sync
VDI Sync
Answer:
C
Explanation:
Nutanix Files includes several features for managing data availability and mobility across sites. Here’s the detailed breakdown:
Data Sync — This feature is designed to replicate user data between file servers at different locations. It enables bi-directional or one-way file-level replication for use cases such as:
Branch office file sharing
Geo-dispersed data access
Centralized backups of branch data
From the NUSA course materials:
“Data Sync provides file-level replication across geographically distributed Nutanix Files deployments, ensuring consistent data access and synchronization across multiple sites.”
This feature is purpose-built for cross-location file data replication, meeting the administrator’s need.
Data Protection — Refers to snapshot-based local or remote protection of the entire file server or shares, not file-level sync across different locations.
Smart Sync — Specific to Object data within Nutanix Objects, not for Files.
VDI Sync — Designed for syncing user profiles in VDI environments, not general file share replication.
Thus, the administrator should use Data Sync for replicating user data across file servers in different locations.
An administrator is managing a Nutanix Files instance at a dark site. The administrator has been tasked to configure a solution to alert the security team when more than 500 files are renamed hourly. Which configuration should be applied?
Options:
Set up Data Management Protection in Files Manager
Define an anomaly rule in File Analytics
Configure Nutanix Data Lens ransomware protection
Add MMC Snap-In for Nutanix Files
Answer:
B
Explanation:
To alert the security team when more than 500 files are renamed hourly on a Nutanix Files instance at a dark site, the administrator should define an anomaly rule in File Analytics. Nutanix File Analytics is a monitoring and analytics tool for Nutanix Files that provides visibility into file share activities, including file operations like renames. Anomaly rules allow administrators to detect unusual activities and configure alerts, such as email notifications, for specific thresholds.
The Nutanix Unified Storage Administration (NUSA) course states, “File Analytics enables administrators to define anomaly rules to monitor file activities, such as file renames, and set thresholds for alerts, making it ideal for detecting unusual behavior like mass file renaming.” The administrator can create an anomaly rule to track file rename operations and set a threshold of more than 500 renames per hour, triggering an email alert to the security team when this condition is met. This functionality works in a dark site environment, as File Analytics operates locally within the Nutanix cluster and does not require Internet access.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “anomaly rules in File Analytics can be configured to monitor specific file operations, such as renames, with customizable thresholds and notification settings, ensuring timely alerts for potential security issues.” This makes File Analytics the best tool for the task, as it provides granular control over monitoring and alerting for file activities.
The other options are incorrect:
Set up Data Management Protection in Files Manager: Data Management Protection is not a feature of Nutanix Files; it may refer to backup or replication features, which do not address file rename monitoring.
Configure Nutanix Data Lens ransomware protection: Nutanix Data Lens focuses on data lifecycle management and tiering, not real-time monitoring of file operations like renames. While it has some ransomware detection capabilities, it is not designed for specific thresholds like 500 file renames per hour and requires Internet access, which is unavailable in a dark site.
Add MMC Snap-In for Nutanix Files: The MMC (Microsoft Management Console) Snap-In is used for managing Nutanix Files from a Windows system but does not provide monitoring or alerting capabilities for file rename operations.
The NUSA course documentation highlights that “File Analytics anomaly rules are the recommended solution for monitoring file operations like mass renames, providing customizable thresholds and alerts even in dark site environments.”
An administrator is concerned that storage in the Nutanix File Server is being used to store personal photos and videos. How can the administrator determine if this is the case?
Options:
Examine the Usage Summary table for the File Server Container in the Prism Element Storage page.
Examine the File Activity widget in the File Analytics dashboard for the File Server.
Examine the File Distribution by Type widget from the Files Console for the File Server.
Examine the File Distribution by Type widget in the File Analytics dashboard for the File Server.
Answer:
D
Explanation:
To determine if the Nutanix File Server is being used to store personal photos and videos, the administrator should examine the File Distribution by Type widget in the File Analytics dashboard for the File Server. Nutanix File Analytics is a monitoring and analytics tool that provides detailed insights into file share activities, including the types of files stored on the file server. The File Distribution by Type widget specifically categorizes files by their extensions (e.g., .jpg, .mp4), allowing the administrator to identify whether image or video files are present.
The Nutanix Unified Storage Administration (NUSA) course states, “The File Analytics dashboard includes the File Distribution by Type widget, which displays the breakdown of file types stored on the Nutanix File Server, enabling administrators to identify specific file categories such as images or videos.” This widget provides a visual representation of file types, making it easy to detect if personal photos (e.g., .jpg, .png) or videos (e.g., .mp4, .avi) are being stored.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “File Analytics offers granular visibility into file storage patterns through widgets like File Distribution by Type, which is ideal for identifying unauthorized or non-business-related content, such as personal media files.” By accessing this widget in the File Analytics dashboard, the administrator can confirm the presence of photo and video files and take appropriate action, such as setting policies to restrict such content.
The other options are incorrect or insufficient:
Examine the Usage Summary table for the File Server Container in the Prism Element Storage page: The Usage Summary table in Prism Element provides high-level storage metrics (e.g., capacity usage) but does not break down data by file type, so it cannot identify photos or videos.
Examine the File Activity widget in the File Analytics dashboard for the File Server: The File Activity widget shows file access patterns (e.g., read/write operations) but does not provide details about file types, making it unsuitable for this purpose.
Examine the File Distribution by Type widget from the Files Console for the File Server: The Nutanix Files Console is used for managing file servers and shares, but it does not include a File Distribution by Type widget. This widget is specific to the File Analytics dashboard.
The NUSA course documentation highlights that “the File Distribution by Type widget in File Analytics is a key tool for auditing file content, allowing administrators to detect and manage non-compliant or personal files, such as photos and videos, stored on the file server.”
Refer to the exhibit.
In the exhibit, what does "AIXforyou@123" represent?
Options:
Volume Group
CHAP Secret
Volume Name
iSCSI Host
Answer:
B
Explanation:
Comprehensive and Detailed Explanation from Nutanix Unified Storage (NCP-US) and Nutanix Unified Storage Administration (NUSA) course documents:
In the exhibit, the iSCSI target connection string is shown. It includes:
The target IP address and port (10.1.216.192, port 3260)
The iSCSI Qualified Name (IQN) for the target (iqn.2010-06.com.nutanix:vg1-...)
The Volume Group identifier (vg1-5ff34411...)
And finally, "AIXforyou@123"
In Nutanix Unified Storage, when configuring iSCSI connections for Volume Groups, CHAP (Challenge-Handshake Authentication Protocol) is used for secure authentication between the iSCSI initiator (host) and the target (Volume Group). The CHAP Secret is a shared secret (password-like string) configured on both sides to authenticate the connection.
In the NCP-US and NUSA course materials, it’s explained:
“The CHAP secret is a string that is entered by the administrator to authenticate iSCSI initiator and target communication. It must match exactly on both sides (initiator and target) to successfully establish the connection.”
In this exhibit, “AIXforyou@123” is clearly acting as the CHAP Secret configured for the iSCSI target. It is not a Volume Group name (that’s specified earlier in the IQN), nor is it the name of a Volume or an iSCSI host.
Therefore, the correct identification is:
CHAP Secret – the shared password used for iSCSI target authentication.
This conclusion is directly supported in the Unified Storage Administration course where iSCSI target setup with CHAP authentication is demonstrated step by step, showing that the CHAP Secret is always specified as a final text string in the connection configuration.
Exhibit:
An administrator is enabling Nutanix Volumes for use with workloads within a Nutanix-based environment. Based on the exhibit, which field is required by Nutanix Volumes to be populated?
Options:
FQDN
iSCSI Data Services IP
Virtual IPv6
Virtual IP
Answer:
B
Explanation:
The exhibit shows the "Cluster Details" page in a Nutanix Prism interface, displaying fields such as Cluster Name, FQDN, Virtual IP, Virtual IPv6, and iSCSI Data Services IP. The administrator is enabling Nutanix Volumes, which is a block storage service that provides iSCSI-based storage for workloads. Nutanix Volumes allows external hosts or VMs to connect to the Nutanix cluster via iSCSI, requiring a specific IP address for iSCSI communication.
According to the Nutanix Unified Storage Administration (NUSA) course, “Nutanix Volumes requires the iSCSI Data Services IP to be configured in the cluster settings to enable iSCSI connectivity for external hosts or workloads.” The iSCSI Data Services IP is a dedicated IP address used by the Nutanix cluster to handle iSCSI traffic, ensuring that iSCSI initiators (clients) can connect to the cluster and access block storage volumes. This field must be populated to enable Nutanix Volumes functionality, as it serves as the endpoint for iSCSI communication.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “the iSCSI Data Services IP is a mandatory field when enabling Nutanix Volumes, as it defines the IP address that external iSCSI initiators use to connect to the cluster for block storage access.” Without this IP address, Nutanix Volumes cannot function, as there would be no designated network endpoint for iSCSI traffic.
In the exhibit, the "iSCSI Data Services IP" field is present, indicating its relevance to Nutanix Volumes configuration. The other fields are not mandatory for enabling Nutanix Volumes:
FQDN (Fully Qualified Domain Name): The FQDN is optional and used for resolving the cluster’s name in DNS. It is not required for Nutanix Volumes to function, as iSCSI connectivity relies on IP addresses, not DNS names.
Virtual IPv6: This field is for configuring a Virtual IP using IPv6 for cluster management access (e.g., Prism GUI). Nutanix Volumes does not require IPv6; the iSCSI Data Services IP typically uses IPv4, and IPv6 support is optional.
Virtual IP: The Virtual IP (IPv4) is used for accessing the Prism GUI and other cluster management services. While recommended for cluster management, it is not specifically required for Nutanix Volumes, as iSCSI traffic uses the iSCSI Data Services IP.
The NUSA course documentation emphasizes that “configuring the iSCSI Data Services IP is a prerequisite for enabling Nutanix Volumes, ensuring that iSCSI initiators can connect to the cluster for block storage operations.” The administrator must populate this field with a valid IP address from the cluster’s network to enable Nutanix Volumes successfully.
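The same field can also be populated from a CVM command line; a hedged sketch with a placeholder address follows (verify the parameter name against the ncli reference for the AOS version in use):

```
# Placeholder address; run from any CVM in the cluster.
ncli cluster edit-params external-data-services-ip-address=10.1.5.50

# Confirm the value was applied.
ncli cluster info | grep -i "data services"
```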
Question:
A user with Edit Buckets permission has been tasked with deleting old Nutanix Objects buckets created by a former employee.
Why is this user unable to execute the task?
Options:
User is only able to delete buckets assigned to them.
The buckets don't have Object Versioning enabled.
The buckets don't have a Lifecycle Policy associated.
User does not have the Delete Buckets permission.
Answer:
D
Explanation:
In Nutanix Objects, bucket management permissions are granularly controlled. The Edit Buckets permission allows a user to modify bucket configurations (such as policy changes, tagging, and settings), but it does not grant the ability to delete the bucket.
From the NUSA training:
“The Delete Buckets permission is separate from Edit Buckets. Users with Edit Buckets can change configurations but cannot remove the bucket itself.”
Thus, the user’s inability to delete buckets stems from lacking the explicit Delete Buckets permission.
How many IP addresses are required by the client network when deploying Nutanix Files?
Options:
One more IP address than the number of FSVMs
The same number of IP addresses as the number of FSVM nodes
One fewer IP address than the number of FSVMs
Twice as many IP addresses as the number of FSVMs
Answer:
A
Explanation:
When deploying Nutanix Files, the client network requires one more IP address than the number of File Server Virtual Machines (FSVMs). Nutanix Files uses a distributed architecture where each FSVM handles file services for clients via protocols like SMB or NFS. The client network is used for client-facing traffic, and it requires one IP address per FSVM plus an additional virtual IP address (VIP) that serves as the primary access point for clients.
According to the Nutanix Unified Storage Administration (NUSA) course, “Nutanix Files requires one IP address per FSVM on the client network for client communication, plus one additional VIP that provides a unified endpoint for accessing file shares.” The VIP is load-balanced across the FSVMs, ensuring high availability and seamless client access even if an FSVM fails.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further clarifies that “the client network for Nutanix Files must be configured with one IP address for each FSVM and an additional VIP, resulting in a total of N+1 IP addresses, where N is the number of FSVMs.” For example, a deployment with three FSVMs requires four IP addresses: three for the FSVMs and one for the VIP.
The other options are incorrect:
The same number of IP addresses as the number of FSVM nodes: This does not account for the additional VIP required for client access, which is essential for load balancing and failover.
One less IP address as the number of FSVMs: This is not feasible, as each FSVM requires its own IP address, and the VIP adds an additional requirement.
Twice as many IP addresses as the number of FSVMs: This overestimates the IP address needs, as only one additional VIP is required, not double the number of FSVMs.
The NUSA course documentation emphasizes that “the client network VIP simplifies client access to Nutanix Files by providing a single IP address that abstracts the underlying FSVMs, requiring one additional IP address beyond the FSVM count.”
An administrator needs to create a Nutanix Data Lens Report, which will be scheduled to automatically run Friday at 7:00pm. Which two formats can be used for the scheduled report? (Choose two.)
Options:
JSON
CSV
XML
PDF
Answer:
B, D
Explanation:
Nutanix Data Lens provides reporting capabilities for Nutanix Files and Objects, allowing administrators to schedule reports to run automatically, such as on Fridays at 7:00pm. When scheduling a report in Data Lens, the available output formats for the scheduled report are **CSV** and **PDF**. These formats are widely supported for data analysis (CSV) and presentation/sharing (PDF), making them suitable for automated reports.
The **Nutanix Unified Storage Administration (NUSA)** course states, “Nutanix Data Lens supports scheduling reports to run automatically, with output available in CSV and PDF formats, enabling administrators to analyze and share data efficiently.” CSV (Comma-Separated Values) is ideal for importing into spreadsheets or other data analysis tools, while PDF provides a formatted, human-readable document that can be easily shared with stakeholders.
The **Nutanix Certified Professional - Unified Storage (NCP-US)** study guide further elaborates that “scheduled reports in Nutanix Data Lens can be generated in CSV and PDF formats, providing flexibility for both data analysis and reporting purposes.” The administrator can configure the report in Data Lens, set the schedule for Friday at 7:00pm, and select CSV, PDF, or both as the output formats for delivery (e.g., via email or download).
The other options are incorrect:
- **JSON**: JSON is a data interchange format but is not supported as an output format for scheduled reports in Nutanix Data Lens.
- **XML**: XML is another data format but is not supported for Data Lens scheduled reports, which are limited to CSV and PDF.
The NUSA course documentation emphasizes that “Data Lens scheduled reports can be generated in CSV and PDF formats, ensuring compatibility with various use cases for data analysis and presentation.”
Question:
An administrator has been asked to lock a file indefinitely. The lock can be explicitly removed only by authorized users.
Which configuration matches the requirements of this task?
Options:
Nutanix Objects Legal hold
Nutanix Objects with WORM versioning
Data Lens Ransomware Protection
Blocked File Types for Files
Answer:
A
Explanation:
Legal Hold in Nutanix Objects is a feature designed for compliance and regulatory use cases, ensuring that specific objects (files) cannot be deleted or modified for an indefinite period, even if WORM (Write Once Read Many) policies exist.
Here’s how it matches the scenario:
Indefinite Lock:
Legal Hold ensures that once applied, the object is locked indefinitely.
Unlike WORM retention, which is based on a fixed duration (like days/months), Legal Hold has no expiration until an authorized administrator explicitly removes it.
Authorized Removal Only:
Only users with specific Legal Hold management permissions can remove the lock, maintaining compliance and governance integrity.
The NUSA course materials emphasize:
“Legal Hold is a compliance feature that prevents deletion or modification of specific objects. It can only be lifted by authorized administrators, ensuring that the data remains immutable as long as required by legal or regulatory processes.”
The other options:
WORM versioning — locks data for a fixed retention period; it does not provide indefinite locking.
Data Lens Ransomware Protection — focuses on monitoring for anomalies, not explicit file locking.
Blocked File Types for Files — prevents certain files from being uploaded but does not lock already uploaded files.
Thus, to indefinitely lock a file in Nutanix Objects, the administrator should use Legal Hold.
An administrator has determined that adding File Server VMs to the cluster will provide more resources.
What must the administrator validate so that the new File Server VMs can be added?
Options:
Ensure network ports are available.
The number of nodes in the cluster is greater than the current number of FSVMs.
Sufficient storage container space is available to host the volume groups.
Ensure Files Analytics is installed.
Answer:
B
Explanation:
Comprehensive and Detailed Explanation from Nutanix Unified Storage (NCP-US) and Nutanix Unified Storage Administration (NUSA) course documents:
In the context of expanding Nutanix Files (which is the file services capability of Nutanix Unified Storage), adding additional File Server VMs (FSVMs) to the cluster allows the file service to scale out and provide more resources for file services workloads, including performance and capacity improvements.
The Nutanix Files architecture involves deploying FSVMs that are distributed across the cluster nodes. Each FSVM handles file protocol operations and interacts with the underlying Nutanix Distributed Storage Fabric (DSF).
Here’s what’s critical when adding new FSVMs:
Sufficient Cluster Nodes Requirement: The Nutanix Unified Storage Administration (NUSA) course emphasizes that the number of FSVMs cannot exceed the number of physical nodes in the cluster. This is because each FSVM is deployed as a VM on a physical node, and Nutanix best practices require that FSVMs be spread out evenly across available nodes for performance, load balancing, and resiliency. Therefore, you must ensure:
“The number of nodes in the cluster must be greater than or equal to the number of FSVMs you plan to deploy.”
This ensures that FSVMs are properly balanced and have the physical resources they need for optimal operation.
Network Ports: While ensuring that appropriate network ports are configured is important for the operation of Nutanix Files (including communication with clients via SMB/NFS and integration with Prism), it is not the gating factor for adding new FSVMs. The critical factor is the available cluster nodes.
Storage Container Space: Storage container space is also essential for file data storage, but this is not a direct requirement when simply adding FSVMs. FSVMs use the existing DSF storage, and as long as there is available storage capacity overall, adding FSVMs does not require validating specific volume group space.
Files Analytics: Files Analytics is an optional feature that provides advanced analytics for file shares, such as usage patterns and security insights. It is not required to add new FSVMs.
Design Best Practices: In the NUSA course, administrators are taught to always validate the number of cluster nodes first before deploying additional FSVMs. This ensures that the cluster can accommodate the new FSVMs without causing resource contention or violating best practice guidelines for balanced and resilient file server deployments.
Resilience and High Availability: Because FSVMs are distributed across the physical cluster nodes, having more nodes than FSVMs ensures that if a node fails, the FSVMs can fail over to other available nodes. This helps maintain the high availability of file services.
In summary, while other factors like network ports, container space, and analytics capabilities play roles in the broader operation and management of Nutanix Files, the absolute requirement for adding FSVMs is ensuring that there are enough cluster nodes to host them. This ensures compliance with design best practices for scalability and resilience, as emphasized in the official Nutanix training courses.
An administrator needs to ensure the company has access to key information about their Nutanix Files deployment shares and files, such as Malicious Clients, Vulnerable Shares, and a list of potential ransomware attack attempts. What must be deployed on-premises to provide the monitoring needed to see this information?
Options:
LCM dark site webserver
Prism Central
Data Lens
File Analytics VM
Answer:
D
Explanation:
To monitor key information about a Nutanix Files deployment, such as Malicious Clients, Vulnerable Shares, and a list of potential ransomware attack attempts, the administrator must deploy the File Analytics VM on-premises. Nutanix File Analytics is a dedicated virtual machine that provides advanced monitoring and analytics for Nutanix Files, offering insights into security-related activities, including malicious client behavior, share vulnerabilities, and ransomware detection.
The Nutanix Unified Storage Administration (NUSA) course states, “File Analytics is a VM that must be deployed on-premises to provide detailed monitoring of Nutanix Files, including identifying Malicious Clients, Vulnerable Shares, and potential ransomware attack attempts through its analytics and anomaly detection features.” File Analytics includes dashboards and widgets that specifically highlight security risks, such as the Malicious Clients list (clients exhibiting suspicious behavior), Vulnerable Shares (shares with overly permissive access), and ransomware detection (based on file activity patterns like mass encryption or renaming).
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “deploying the File Analytics VM enables administrators to monitor Nutanix Files for security threats, providing visibility into Malicious Clients, Vulnerable Shares, and ransomware attempts through its integrated analytics engine.” File Analytics runs locally within the Nutanix cluster, making it suitable for on-premises deployments and capable of operating in isolated environments like dark sites.
The other options are incorrect:
LCM dark site webserver: An LCM dark site webserver is used to host software updates for LCM in air-gapped environments but does not provide monitoring or analytics for Nutanix Files.
Prism Central: Prism Central provides centralized management and monitoring for Nutanix clusters but does not offer the specific security-focused analytics (e.g., Malicious Clients, ransomware detection) that File Analytics provides for Nutanix Files.
Data Lens: Nutanix Data Lens is a cloud-based service for data lifecycle management and analytics, primarily for Nutanix Objects and Files, but it focuses on tiering and data placement, not security monitoring like ransomware detection or malicious clients.
The NUSA course documentation emphasizes that “the File Analytics VM is the essential on-premises component for monitoring Nutanix Files, providing critical security insights such as Malicious Clients, Vulnerable Shares, and ransomware attack attempts.”
After enabling Nutanix Objects, what action should be performed before starting the deployment?
Options:
Create a Container
Perform an LCM inventory
Create a Volume Group
Create Object Store
Answer:
D
Explanation:
After enabling Nutanix Objects in a Nutanix cluster, the next action before starting the deployment is to create an Object Store. Enabling Nutanix Objects activates the object storage service on the cluster, but the actual deployment involves creating an object store instance, which defines the storage resources, network settings, and other configurations needed for object storage operations.
The Nutanix Unified Storage Administration (NUSA) course states, “After enabling Nutanix Objects, the administrator must create an Object Store to deploy the object storage service, specifying parameters such as storage capacity, network settings, and domain name.” The object store is the primary entity in Nutanix Objects, and creating it sets up the infrastructure for buckets, S3-compatible APIs, and other object storage features. Only after the object store is created can buckets be added and used for storing objects.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “the deployment of Nutanix Objects begins with creating an Object Store, which initializes the service and prepares it for bucket creation and data storage.” This step is necessary to operationalize Nutanix Objects after enabling the feature in the cluster.
The other options are incorrect:
Create a Container: Containers in Nutanix refer to storage pools or logical containers for VMs and volumes, not for Nutanix Objects. In the context of Objects, the equivalent is a bucket, which is created after the object store.
Perform an LCM inventory: An LCM inventory is relevant for upgrades, not for the initial deployment of Nutanix Objects after enabling the feature.
Create a Volume Group: Volume groups are used for Nutanix Volumes (block storage), not Nutanix Objects (object storage).
The NUSA course documentation emphasizes that “creating an Object Store is the first step after enabling Nutanix Objects, ensuring the service is deployed and ready for use.”
A company is planning to upgrade the Nutanix Objects cluster deployed on-premise to the latest version. An administrator has logged into Prism Central using domain credentials. After navigating to the LCM page and performing an inventory, the administrator notices that the latest version of Objects is not showing. The following components have been updated to the latest available version listed in LCM: MSP Controller, Objects Manager, Objects Services. After running an LCM inventory successfully, the latest version of Objects still is not listed. What could be the reason?
Options:
The administrator does not have needed permissions
The Objects version is not supported on-premise
Prism Central is not running a compatible version
The MSP Controller on Prism Element has not been updated
Answer:
C
Explanation:
The issue involves an administrator attempting to upgrade a Nutanix Objects cluster using Prism Central’s Lifecycle Manager (LCM), but the latest version of Nutanix Objects is not listed after running an inventory, despite other components (MSP Controller, Objects Manager, Objects Services) being updated. The most likely reason is that Prism Central is not running a compatible version required to support the latest Nutanix Objects version.
The Nutanix Unified Storage Administration (NUSA) course states, “LCM upgrades for Nutanix Objects require Prism Central to be running a version that is compatible with the target Objects version; if Prism Central is not on a compatible version, the latest Objects version will not be listed in the LCM inventory.” Prism Central orchestrates LCM upgrades, and its version must support the new features, APIs, and metadata of the target Nutanix Objects version. If Prism Central is running an older version, it may not recognize or list newer versions of Nutanix Objects available for upgrade.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “a common reason for missing component versions in LCM is an incompatible Prism Central version; administrators must ensure Prism Central is upgraded to a version that supports the target Nutanix Objects release.” The guide recommends checking the Nutanix compatibility matrix to verify that the current Prism Central version supports the desired Objects version and upgrading Prism Central if necessary.
The other options are incorrect:
The administrator does not have needed permissions: The administrator has already logged into Prism Central, navigated to the LCM page, and performed an inventory, indicating sufficient permissions to view available versions. Permission issues would typically prevent access to LCM entirely.
The Objects version is not supported on-premise: Nutanix Objects is fully supported on-premise, and there is no indication that the target version is cloud-only.
The MSP Controller on Prism Element has not been updated: The MSP Controller has already been updated to the latest version as per the scenario, and the MSP Controller on Prism Element is not directly responsible for listing Objects versions in Prism Central’s LCM.
The NUSA course documentation emphasizes that “ensuring Prism Central is on a compatible version is a critical step before upgrading Nutanix Objects via LCM; an incompatible Prism Central version will prevent the latest Objects version from appearing in the inventory.”
Question:
Which two minimum permission roles must a non-admin user have to enable Nutanix Objects? (Choose two.)
Options:
Files Admin
Cluster Admin
Category Admin
User Admin
Answer:
B, D
Explanation:
To enable Nutanix Objects (deploy a new Objects instance and manage bucket creation), a non-admin user must have the following minimum permissions:
Cluster Admin:
Grants full cluster-level privileges, including resource provisioning, configuration, and management.
Required to deploy services like Objects because it interacts with cluster resources directly.
User Admin:
Allows user management and security roles necessary for configuring access to Objects.
Critical when setting up Object Stores and managing authentication.
According to the NUSA course:
“A non-admin user must have at least the Cluster Admin role and the User Admin role to enable and manage Nutanix Objects deployments. Cluster Admin manages resources, and User Admin manages user-level permissions.”
The other roles:
Files Admin — manages Nutanix Files only.
Category Admin — relates to category/tag management in Prism, not Objects deployment.
Thus, to enable Nutanix Objects, the user needs Cluster Admin and User Admin permissions.
Question:
An administrator has received a complaint from a user that a Windows VM lost access to an iSCSI Volume Group (VG) during a maintenance window of an ESXi-based Nutanix cluster. The VM’s iSCSI configuration shows it is connecting to a specific IP (172.20.100.104).
What recommended change should the administrator make to resolve this disruption?
Options:
Change the Discovery IP to match the configured VIP.
Remove Discovery IP and configure with DSIP.
Add all missing CVM IPs in Discovery tab.
Select the Enable multi-path checkbox.
Answer:
B
Explanation:
When configuring iSCSI connections to Nutanix Volume Groups (VGs), Nutanix recommends using the Data Services IP (DSIP) as the discovery IP in the iSCSI Initiator configuration. Here’s why:
The DSIP (172.20.100.50) in this environment is designed to be highly available and floats across CVMs within the Nutanix cluster.
The DSIP automatically handles failover between CVMs during maintenance, software upgrades, or node failures.
Configuring the iSCSI initiator with individual CVM IPs (like 172.20.100.104) is not recommended because:
If the CVM goes down (maintenance, upgrade, etc.), the initiator will lose connection to the volume group, causing the exact issue seen here.
The NUSA and NCP-US course materials specifically emphasize:
“The Data Services IP should be used as the discovery target for iSCSI Volume Groups to ensure automatic failover and eliminate connection disruptions during maintenance windows.”
The VIP is used for management traffic (Prism Central/Prism Element) and is not used for iSCSI.
Enable multi-path is important for performance but does not resolve this misconfigured discovery IP issue.
Adding all CVMs individually also doesn’t provide automated failover and isn’t a best practice.
Thus, the fix is to remove the CVM IP (172.20.100.104) and configure the Windows iSCSI initiator with the DSIP (172.20.100.50) as the discovery target.
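The correction itself is made in the Windows iSCSI Initiator properties; purely for illustration, the equivalent change on a Linux initiator would look like the following sketch, where the target IQN is a placeholder:

```
# Remove the node records that were discovered through the individual CVM address.
iscsiadm -m node -T iqn.2010-06.com.nutanix:app-vg -p 172.20.100.104:3260 -o delete

# Re-discover and log in through the Data Services IP, which fails over between CVMs.
iscsiadm -m discovery -t sendtargets -p 172.20.100.50:3260
iscsiadm -m node -T iqn.2010-06.com.nutanix:app-vg --login
```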
An administrator needs to configure a bare-metal server to boot from a Nutanix Volumes-hosted virtual disk.
Which volume-group configuration option must be entered or selected for the client to boot over the network?
Options:
Iscsi_max_recv_data_segment_length
Use DHCP for iSCSI Target Information
Enable external client access
Enable Chap log on
Answer:
B
Explanation:
The Nutanix Unified Storage Administration (NUSA) course, in the module “Configuring Volume Groups for External Clients,” highlights that when configuring iSCSI boot for external (bare-metal) clients, the “Use DHCP for iSCSI Target Information” option must be enabled. This allows the iSCSI boot firmware (iBFT) to automatically discover the iSCSI target (Volume Group) using DHCP-provided parameters.
The course documentation states:
“Enabling ‘Use DHCP for iSCSI Target Information’ is essential for booting external clients from Nutanix Volumes using iSCSI. DHCP provides the iSCSI target IP and other relevant data for a successful boot.”
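As a hedged illustration of the DHCP side of such a boot setup, an ISC dhcpd scope with a placeholder subnet, Data Services IP, and target IQN might look like this (the exact options required depend on the NIC's iSCSI boot firmware):

```
# Append an iSCSI boot scope to a hypothetical ISC DHCP server configuration.
# root-path follows the RFC 4173 format: iscsi:<server>:<protocol>:<port>:<LUN>:<target IQN>
cat >> /etc/dhcp/dhcpd.conf <<'EOF'
subnet 10.1.5.0 netmask 255.255.255.0 {
  range 10.1.5.100 10.1.5.150;
  option routers 10.1.5.1;
  option root-path "iscsi:10.1.5.50::3260:0:iqn.2010-06.com.nutanix:baremetal-boot-vg";
}
EOF
systemctl restart dhcpd
```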
An administrator is currently troubleshooting a failed Nutanix Objects deployment using LCM and sees the error message shown in the exhibit.
The Objects cluster deployment is experiencing the following symptoms:
• The Objects Home UI Page shows the error: unable to pull the docker images
• The docker pull is failing on the first image
By examining msp_controller.out, the administrator determined that the MSP cluster deployment completed successfully.
Which log file should the administrator use to investigate and troubleshoot this issue further?
Options:
domain_manager.out
aoss_service_manager.out
lcm_metrics_uploader.out
cluster_health.out
Answer:
B
Explanation:
According to the Nutanix Unified Storage Administration (NUSA) course, in the Troubleshooting Nutanix Objects Deployment section, the aoss_service_manager.out log file is explicitly responsible for tracking the status and lifecycle of container services, including pulling Docker images during the deployment of Nutanix Objects.
This log file is where administrators should look for:
Container image pull attempts
Any errors during docker pull actions
Overall container service management actions and errors
The module “Deploying and Troubleshooting Nutanix Objects” from the NUSA course states:
“During deployment of Nutanix Objects, the aoss_service_manager.out log file provides detailed status information regarding container image pulls, container lifecycle events, and object service startup procedures. This log file is essential when troubleshooting deployment failures related to container image downloads.”
The other log files listed in the question are used for different components:
domain_manager.out: Related to domain services and identity management.
lcm_metrics_uploader.out: Responsible for uploading metrics, not related to container image pulls.
cluster_health.out: Used for overall cluster health, but not specific to container lifecycle events.
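A hedged troubleshooting sketch follows; the log path shown is the conventional Nutanix log directory and is an assumption here, and the registry and image names are placeholders:

```
# Assumed log location; adjust to where aoss_service_manager.out resides in your deployment.
grep -iE "pull|docker|error" /home/nutanix/data/logs/aoss_service_manager.out | tail -n 50

# Placeholder registry/image: reproduce the failing pull manually to check registry reachability.
docker pull msp-registry.example.com/nutanix-msp/objects-controller:latest
```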
An administrator manages a three-node AHV cluster running Nutanix Files and is attempting a Files scale-out operation on a multi-node FSVM deployment. However, the operation has failed. What should the administrator do first?
Options:
Add RAM to the physical hosts
Failover to secondary site
Expand the AHV cluster
Add DNS entries
Answer:
C
Explanation:
The administrator is attempting to scale out a Nutanix Files deployment by adding more File Server Virtual Machines (FSVMs) to a multi-node FSVM deployment on a three-node AHV cluster, but the operation has failed. The first step the administrator should take is to expand the AHV cluster. Nutanix Files requires a minimum number of nodes in the cluster to support a scale-out operation, and a three-node cluster may not have sufficient resources (nodes) to accommodate additional FSVMs.
The Nutanix Unified Storage Administration (NUSA) course states, “Nutanix Files scale-out operations require sufficient cluster nodes to host additional FSVMs, and a minimum of four nodes is recommended for scaling out a multi-node FSVM deployment.” In a three-node cluster, each node typically hosts one FSVM (for a total of three FSVMs), and scaling out to add more FSVMs requires additional nodes to distribute the new FSVMs. If the cluster does not have enough nodes, the scale-out operation will fail, as there are no available nodes to host the new FSVMs.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “when a Nutanix Files scale-out operation fails on a small cluster, the first step is to verify the cluster size and expand the AHV cluster by adding more nodes to support the additional FSVMs.” Expanding the cluster to at least four nodes provides the necessary capacity to host a new FSVM, allowing the scale-out operation to succeed.
The other options are incorrect:
Add RAM to the physical hosts: While insufficient RAM could cause issues, the failure of a scale-out operation is more likely due to a lack of nodes rather than RAM, especially since FSVMs have specific node placement requirements.
Failover to secondary site: Failover to a secondary site is relevant for disaster recovery (e.g., using Smart DR), not for resolving a scale-out failure within the primary cluster.
Add DNS entries: DNS entries may be needed for client access to Nutanix Files, but they are not directly related to the scale-out operation of FSVMs within the cluster.
The NUSA course documentation emphasizes that “a common cause of Nutanix Files scale-out failures in small clusters is insufficient nodes; expanding the AHV cluster to at least four nodes is the first step to ensure successful scaling.”
An administrator wants to control the user visibility of SMB folders and files based on user permissions.
What feature should the administrator choose to accomplish this?
Options:
Access-Based Enumeration (ABE)
File Analytics
Files blocking
Role Based Access Control (RBAC)
Answer:
A
Explanation:
Access-Based Enumeration (ABE) is a feature in Nutanix Files that controls whether users can see folders and files for which they do not have access permissions. When ABE is enabled:
Users will only see the folders/files they are authorized to access.
Items for which they have no permissions will be hidden from view.
The NUSA course describes this feature:
“Access-Based Enumeration (ABE) ensures that users browsing a share will only see folders and files that they have permission to access, improving security and minimizing confusion.”
Thus, ABE is the precise feature for controlling user visibility of SMB shares based on permissions.
An administrator has configured a volume-group with four vDisks and needs them to be load-balanced across multiple CVMs. The volume-group will be directly connected to the VM. Which task must the administrator perform to meet this requirement?
Options:
Enable load-balancing for the volume-group using ncli
Select multiple initiator IQNs when creating the volume-group
Select multiple iSCSI adapters within the VM
Enable load-balancing for the volume-group using acli
Answer:
D
Explanation:
To load-balance a volume-group with four vDisks across multiple Controller Virtual Machines (CVMs) for a VM using Nutanix Volumes, the administrator must enable load-balancing for the volume-group using acli. Nutanix Volumes supports iSCSI-based block storage, and load-balancing ensures that I/O traffic from the VM is distributed across multiple CVMs, improving performance and scalability. The acli (AHV Command-Line Interface) is the tool used to configure this setting for volume-groups.
The Nutanix Unified Storage Administration (NUSA) course states, “Nutanix Volumes supports load-balancing of iSCSI traffic across CVMs, which can be enabled for a volume-group using the acli command to ensure optimal performance for VMs.” The specific command in acli allows the administrator to enable load-balancing, distributing the iSCSI sessions for the volume-group’s vDisks across the available CVMs in the cluster. This ensures that the VM’s I/O requests are handled by multiple CVMs, preventing any single CVM from becoming a bottleneck.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “to enable load-balancing for a volume-group, the administrator can use the acli vg.update command with the enable_load_balancing=true option, ensuring that iSCSI traffic is distributed across CVMs for better performance.” This is particularly important for volume-groups with multiple vDisks, as in this case with four vDisks, to optimize I/O distribution.
The other options are incorrect:
Enable load-balancing for the volume-group using ncli: The ncli (Nutanix Command-Line Interface) is used for cluster-wide configurations, but load-balancing for volume-groups is specifically managed via acli, which is tailored for AHV and volume-group operations.
Select multiple initiator IQNs when creating the volume-group: Initiator IQNs (iSCSI Qualified Names) are used to authenticate and connect initiators to the volume-group, but selecting multiple IQNs does not enable load-balancing across CVMs.
Select multiple iSCSI adapters within the VM: Configuring multiple iSCSI adapters in the VM is a client-side configuration that can help with multipathing, but it does not control load-balancing across CVMs, which is a cluster-side setting.
The NUSA course documentation highlights that “enabling load-balancing via acli for a volume-group ensures that iSCSI traffic is distributed across multiple CVMs, optimizing performance for VMs with direct-attached volumes.”
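A minimal sketch of that acli step, using a hypothetical VG name and the option cited in the study-guide quote above (confirm the exact flag name in the acli reference for the AOS release in use):

```
# Hypothetical VG name; flag as cited above -- verify the exact option name for your AOS release.
acli vg.update db-data-vg enable_load_balancing=true

# Inspect the VG afterwards to confirm the setting.
acli vg.get db-data-vg
```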
An administrator has noticed that the object stores have stopped gathering analytics data approximately nine months after they were enabled. How can the administrator resume Data Lens functionality?
Options:
Disable and re-enable analytics on the object store
Rename the bucket within the object store
Remove and re-add IAM user permissions on the object store
Disable and re-enable versioning on the object store
Answer:
A
Explanation:
The issue involves Nutanix Objects stopping the collection of analytics data for Nutanix Data Lens approximately nine months after being enabled. To resume Data Lens functionality, the administrator should disable and re-enable analytics on the object store. Nutanix Data Lens is a cloud-based service that provides analytics and lifecycle management for Nutanix Objects, and issues with analytics data collection can often be resolved by resetting the analytics configuration.
The Nutanix Unified Storage Administration (NUSA) course states, “If Nutanix Data Lens stops gathering analytics data for an object store, a common troubleshooting step is to disable and re-enable analytics on the object store to reset the connection and resume data collection.” This process forces the object store to re-establish its integration with Data Lens, clearing any potential configuration or connectivity issues that may have caused the analytics to stop. The nine-month period suggests a possible timeout or licensing issue with Data Lens, which can be resolved by this action.
The Nutanix Certified Professional - Unified Storage (NCP-US) study guide further elaborates that “disabling and re-enabling analytics on a Nutanix Objects store is an effective way to troubleshoot Data Lens functionality issues, ensuring that the object store re-syncs with Data Lens for analytics data collection.” The administrator can perform this action through Prism Central by navigating to the Nutanix Objects configuration, disabling analytics for the affected object store, and then re-enabling it.
The other options are incorrect:
Rename the bucket within the object store: Renaming a bucket does not affect Data Lens integration or analytics data collection, as Data Lens operates at the object store level, not the bucket level.
Remove and re-add IAM user permissions on the object store: IAM user permissions control access to the object store but are not directly related to Data Lens analytics collection. Changing permissions is unlikely to resolve this issue.
Disable and re-enable versioning on the object store: Versioning allows multiple versions of objects to be stored but does not impact Data Lens analytics functionality.
The NUSA course documentation emphasizes that “resetting analytics by disabling and re-enabling the feature on the object store is a standard troubleshooting step to resume Data Lens functionality when analytics data collection stops unexpectedly.”