What are two ways to manage Objects? (Choose two.)
PC
CLI
API
SSH
There are two ways to manage Objects: PC (Prism Central) and API (Application Programming Interface). PC is a web-based user interface that allows administrators to create, configure, monitor, and manage Objects clusters, buckets, users, and policies. API is a set of S3-compatible REST APIs that allows applications and users to interact with Objects programmatically. API can be used to perform operations such as creating buckets, uploading objects, listing objects, downloading objects, deleting objects, and so on. References: Nutanix Objects User Guide; Nutanix Objects API Reference Guide
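Because the API path is S3-compatible, standard S3 tooling can be used against an Objects endpoint. Below is a minimal sketch of the programmatic operations described above using Python and boto3; the endpoint URL, credentials, and bucket name are hypothetical placeholders, not values from a real Objects deployment.

```python
# Minimal sketch of managing an Objects store via its S3-compatible API.
# The endpoint URL, access key, secret key, and bucket name are placeholders
# for values issued by your own Objects deployment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # hypothetical Objects endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")                      # create a bucket
s3.put_object(Bucket="demo-bucket", Key="hello.txt",
              Body=b"hello")                                # upload an object
for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"])                                       # list objects
s3.delete_object(Bucket="demo-bucket", Key="hello.txt")     # delete an object
```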
Deploying Files instances requires which two minimum resources? (Choose two.)
12 GiB of memory per host
8 vCPUs per host
8 GiB of memory per host
4 vCPUs per host
The two minimum resources required for deploying Files instances are 8 GiB of memory per host and 4 vCPUs per host. Memory and vCPUs are resources that are allocated to VMs (Virtual Machines) to run applications and processes. Files instances are file server instances (FSIs) that run on FSVMs (File Server VMs) on a Nutanix cluster. FSVMs require at least 8 GiB of memory and 4 vCPUs per host to function properly and provide SMB and NFS access to file shares and exports. The administrator should ensure that enough memory and vCPU capacity is available on each host before deploying Files instances. References: Nutanix Files Administration Guide, page 27; Nutanix Files Solution Guide, page 6
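For illustration only, a small pre-deployment check against the per-host minimums cited above might look like the following sketch; the host inventory values are hypothetical.

```python
# Illustrative pre-deployment check against the per-host minimums cited above.
# The host inventory below is hypothetical.
MIN_MEM_GIB = 8
MIN_VCPUS = 4

hosts = [
    {"name": "host-1", "free_mem_gib": 24, "free_vcpus": 10},
    {"name": "host-2", "free_mem_gib": 6,  "free_vcpus": 12},
]

for h in hosts:
    ok = h["free_mem_gib"] >= MIN_MEM_GIB and h["free_vcpus"] >= MIN_VCPUS
    print(f"{h['name']}: {'OK' if ok else 'insufficient resources'}")
```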
An administrator is leveraging Smart DR to protect a Files share. There is a requirement that in the event of a failure, client redirection should be seamless. How should the administrator satisfy this requirement?
Create a reverse replication policy.
Enable redirection in the protection policy.
Update the AD and DNS entries.
Activate protected shares on the recovery site.
Smart DR in Nutanix Files, part of Nutanix Unified Storage (NUS), automates disaster recovery (DR) by replicating shares between primary and recovery file servers (e.g., using NearSync, as in Question 24). The administrator is using Smart DR to protect a Files share and needs seamless client redirection in the event of a failure, meaning clients should automatically connect to the recovery site without manual intervention.
Understanding the Requirement:
Smart DR Protection: Smart DR replicates the Files share from the primary site to the recovery site, typically with the primary site in read-write (RW) mode and the recovery site in read-only (RO) mode (as seen in the exhibit for Question 24).
Seamless Client Redirection: In a failure scenario (e.g., primary site down), clients should automatically redirect to the recovery site without needing to reconfigure their connections (e.g., changing the share path or IP address).
Files Share Context: Clients typically access Files shares via SMB or NFS, using a hostname or IP address (e.g., \\fileserver\share for SMB or fileserver:/share for NFS).
Analysis of Options:
Option A (Create a reverse replication policy): Incorrect. A reverse replication policy would replicate data from the recovery site back to the primary site, typically used after failover to prepare for failback. This does not address seamless client redirection during a failure—it focuses on data replication direction, not client connectivity.
Option B (Enable redirection in the protection policy): Incorrect. Smart DR protection policies define replication settings (e.g., RPO, schedule), but there is no “redirection” setting in the policy itself. Client redirection in Nutanix Files DR scenarios is managed through external mechanisms like DNS, not within the protection policy.
Option C (Update the AD and DNS entries): Correct. Seamless client redirection in Nutanix Files DR scenarios requires that clients can automatically connect to the recovery site without changing their share paths. This is achieved by updating Active Directory (AD) and DNS entries:
DNS Update: The hostname of the file server (e.g., fileserver.company.com) should resolve to the IP address of the primary site’s File Server under normal conditions. During a failure, DNS is updated to point to the recovery site’s File Server IP address (e.g., the Client network IP of the recovery FSVMs). This ensures clients automatically connect to the recovery site without changing the share path (e.g., \\fileserver.company.com\share continues to work).
AD Update: For SMB shares, the Service Principal Name (SPN) in AD must be updated to reflect the recovery site’s File Server, ensuring Kerberos authentication works seamlessly after failover. This approach ensures clients are redirected without manual intervention, meeting the “seamless” requirement.
Option D (Activate protected shares on the recovery site): Incorrect. Activating protected shares on the recovery site (e.g., making them RW during failover) is a necessary step for failover, but it does not ensure seamless client redirection. Without updating DNS/AD, clients will not know to connect to the recovery site—they will continue trying to access the primary site’s IP address, requiring manual reconfiguration (e.g., changing the share path), which is not seamless.
Why Option C?
Seamless client redirection in a Nutanix Files DR scenario requires that clients can connect to the recovery site without changing their share paths. Updating AD and DNS entries ensures that the file server’s hostname resolves to the recovery site’s IP address after failover, and AD authentication (e.g., Kerberos for SMB) continues to work. This allows clients to automatically redirect to the recovery site without manual intervention, fulfilling the requirement for seamlessness.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“To ensure seamless client redirection during a Smart DR failover, update Active Directory (AD) and DNS entries. Configure DNS to resolve the file server’s hostname to the recovery site’s File Server IP address after failover, and update the Service Principal Name (SPN) in AD to ensure Kerberos authentication works for SMB clients. This allows clients to automatically connect to the recovery site without manual reconfiguration.”
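As an illustrative aid for the DNS portion of this answer, the following sketch resolves the file server's hostname and compares it to the recovery site's address after a failover; the hostname and IP address are hypothetical examples.

```python
# Verify that the file server hostname now resolves to the recovery site.
# The hostname and expected IP below are hypothetical examples.
import socket

FILESERVER_FQDN = "fileserver.company.com"
RECOVERY_SITE_IP = "10.20.30.40"

resolved = socket.gethostbyname(FILESERVER_FQDN)
if resolved == RECOVERY_SITE_IP:
    print("DNS points at the recovery site; client redirection should be seamless.")
else:
    print(f"DNS still resolves to {resolved}; update the A record for failover.")
```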
What is the network requirement for a File Analytics deployment?
Must use the CVM network
Must use the Backplane network
Must use the Storage-side network
Must use the Client-side network
Nutanix File Analytics is a feature that provides insights into the usage and activity of file data stored on Nutanix Files. File Analytics consists of a File Analytics VM (FAVM) that runs on a Nutanix cluster and communicates with the File Server VMs (FSVMs) that host the file shares. The FAVM collects metadata and statistics from the FSVMs and displays them in a graphical user interface (GUI). The FAVM must be deployed on the same network as the FSVMs, which is the Client-side network. This network is used for communication between File Analytics and FSVMs, as well as for accessing the File Analytics UI from a web browser. The Client-side network must have DHCP enabled and must be routable from the external hosts that access the file shares and File Analytics UI. References: Nutanix Files Administration Guide, page 93; Nutanix File Analytics Deployment Guide
An administrator is looking for a tool that includes these features:
• Permission Denials
• Top 5 Active Users
• Top 5 Accessed Files
• File Distribution by Type
Which Nutanix tool should the administrator choose?
File Server Manager
Prism Central
File Analytics
Files Console
The tool that includes these features is File Analytics. File Analytics is a feature that provides insights into the usage and activity of file data stored on Files. File Analytics consists of a File Analytics VM (FAVM) that runs on a Nutanix cluster and communicates with the File Server VMs (FSVMs) that host the file shares. File Analytics can display various reports and dashboards that include these features:
Permission Denials: This report shows the number of permission denied events for file operations, such as read, write, delete, etc., along with the user, file, share, and server details.
Top 5 Active Users: This dashboard shows the top five users who performed the most file operations in a given time period, along with the number and type of operations.
Top 5 Accessed Files: This dashboard shows the top five files that were accessed the most in a given time period, along with the number of accesses and the file details.
File Distribution by Type: This dashboard shows the distribution of files by their type or extension, such as PDF, DOCX, JPG, etc., along with the number and size of files for each type. References: Nutanix Files Administration Guide, page 93; Nutanix File Analytics User Guide
Before upgrading Files or creating a file server, which component must first be upgraded to a compatible version?
FSM
File Analytics
Prism Central
FSVM
The component that must first be upgraded to a compatible version before upgrading Files or creating a file server is Prism Central. Prism Central is a web-based user interface that allows administrators to manage multiple Nutanix clusters and services, including Files. Prism Central must be upgraded to a compatible version with Files before upgrading an existing file server or creating a new file server. Otherwise, the upgrade or creation process may fail or cause unexpected errors. References: Nutanix Files Administration Guide, page 21; Nutanix Files Upgrade Guide
What is the network requirement for a File Analytics deployment?
Must use the CVM network
Must use the Client-side network
Must use the Backplane network
Must use the Storage-side network
Nutanix File Analytics, part of Nutanix Unified Storage (NUS), is a tool for monitoring and analyzing file data within Nutanix Files deployments. It is deployed as a virtual machine (VM) on the Nutanix cluster and requires network connectivity to communicate with the File Server Virtual Machines (FSVMs) and other components.
Analysis of Options:
Option A (Must use the CVM network): Incorrect. The CVM (Controller VM) network is typically an internal network used for communication between CVMs and storage components (e.g., the Distributed Storage Fabric). File Analytics does not specifically require the CVM network; it needs to communicate with FSVMs over a network accessible to clients and management.
Option B (Must use the Client-side network): Correct. File Analytics requires connectivity to the FSVMs to collect and analyze file data. The Client-side network (also called the external network) is the network used by FSVMs for client communication (e.g., SMB, NFS) and management traffic. File Analytics must be deployed on this network to access the FSVMs, as well as to allow administrators to access its UI.
Option C (Must use the Backplane network): Incorrect. The Backplane network is an internal network used for high-speed communication between nodes in a Nutanix cluster (e.g., for data replication, cluster services). File Analytics does not use the Backplane network, as it needs to communicate externally with FSVMs and users.
Option D (Must use the Storage-side network): Incorrect. The Storage-side network is used for internal communication between FSVMs and the Nutanix cluster’s storage pool. File Analytics does not directly interact with the storage pool; it communicates with FSVMs over the Client-side network to collect analytics data.
Why Option B?
File Analytics needs to communicate with FSVMs to collect file metadata and user activity data, and it also needs to be accessible by administrators for monitoring. The Client-side network (used by FSVMs for client access and management) is the appropriate network for File Analytics deployment, as it ensures connectivity to the FSVMs and allows external access to the File Analytics UI.
Exact Extract from Nutanix Documentation:
From the Nutanix File Analytics Deployment Guide (available on the Nutanix Portal):
“File Analytics must be deployed on the Client-side network, which is the external network used by FSVMs for client communication (e.g., SMB, NFS) and management traffic. This ensures that File Analytics can communicate with the FSVMs to collect analytics data and that administrators can access the File Analytics UI.”
An administrator has discovered that File server services are down on a cluster.
Which service should the administrator investigate for this issue?
Minerva-nvm
Sys_stats_server
Cassandra
Insights_collector
The service that the administrator should investigate for this issue is Minerva-nvm. Minerva-nvm is a service that runs on each FSVM and provides communication between Prism Central and Files services. Minerva-nvm also monitors the health of Files services and reports any failures or alerts to Prism Central. If Minerva-nvm is down on any FSVM, it can affect the availability and functionality of Files services on that cluster. References: Nutanix Files Administration Guide, page 23; Nutanix Files Troubleshooting Guide
The minerva_nvm service is the core service on FSVMs that manages Nutanix Files operations. If File server services are down, this service is the most likely culprit, as it handles all file system activities (e.g., share access, data I/O). Investigating minerva_nvm (e.g., checking its status, logs, or restarting it) is the first step to diagnose and resolve the issue.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“The minerva_nvm service is a critical component of Nutanix Files, running on each FSVM. It manages file system operations, including share access and data management. If File server services are down on a cluster, investigate the minerva_nvm service on the FSVMs, as its failure will cause shares to become inaccessible.”
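As a rough sketch of this investigation (not an official Nutanix diagnostic procedure), an administrator could SSH to each FSVM and confirm the process is running with a generic Linux check; the FSVM IPs and credentials below are placeholders.

```python
# Sketch: check for the minerva_nvm process on each FSVM over SSH.
# Requires paramiko; FSVM IPs and credentials are placeholders, and the
# process check is a generic Linux one, not an official Nutanix procedure.
import paramiko

FSVM_IPS = ["10.10.52.11", "10.10.52.12", "10.10.52.13"]  # hypothetical

for ip in FSVM_IPS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, username="nutanix", password="PASSWORD")
    _, out, _ = ssh.exec_command("pgrep -fl minerva_nvm")
    result = out.read().decode().strip()
    print(f"{ip}: {'running: ' + result if result else 'minerva_nvm NOT running'}")
    ssh.close()
```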
An administrator is upgrading Files from version 3.7 to 4.1 in a highly secured environment. The pre-upgrade check fails with the following error:
"FileServer preupgrade check failed with cause(s) Sub task poll timed out"
What initial troubleshooting step should the administrator take?
Increase upgrades timeout from ecli.
Check there is enough disk space on FSVMs.
Examine the failed tasks on the FSVMs.
Verify connectivity between the FSVMs.
Nutanix Files, part of Nutanix Unified Storage (NUS), requires pre-upgrade checks to ensure a successful upgrade (e.g., from version 3.7 to 4.1). The error “Sub task poll timed out” indicates that a subtask during the pre-upgrade check did not complete within the expected time, likely due to communication or resource issues among the File Server Virtual Machines (FSVMs).
Analysis of Options:
Option A (Increase upgrades timeout from ecli): Incorrect. The ecli (Entity CLI) is not a standard Nutanix command-line tool for managing upgrades, and “upgrades timeout” is not a configurable parameter in this context. While timeouts can sometimes be adjusted, this is not the initial troubleshooting step, and the error suggests a deeper issue (e.g., communication failure) rather than a timeout setting.
Option B (Check there is enough disk space on FSVMs): Incorrect. While insufficient disk space on FSVMs can cause upgrade issues (e.g., during the upgrade process itself), the “Sub task poll timed out” error during pre-upgrade checks is more likely related to communication or task execution issues between FSVMs, not disk space. Disk space checks are typically part of the pre-upgrade validation, and a separate error would be logged if space was the issue.
Option C (Examine the failed tasks on the FSVMs): Incorrect. Examining failed tasks on the FSVMs (e.g., by checking logs) is a valid troubleshooting step, but it is not the initial step. The “Sub task poll timed out” error suggests a communication issue, so verifying connectivity should come first. Once connectivity is confirmed, examining logs for specific task failures would be a logical next step.
Option D (Verify connectivity between the FSVMs): Correct. The “Sub task poll timed out” error indicates that the pre-upgrade check could not complete a subtask, likely because FSVMs were unable to communicate with each other or with the cluster. Nutanix Files upgrades require FSVMs to coordinate tasks, and this coordination depends on network connectivity (e.g., over the Storage and Client networks). Verifying connectivity between FSVMs (e.g., checking network status, VLAN configuration, or firewall rules in a highly secured environment) is the initial troubleshooting step to identify and resolve the root cause of the timeout.
Why Option D?
In a highly secured environment, network restrictions (e.g., firewalls, VLAN misconfigurations) are common causes of communication issues between FSVMs. The “Sub task poll timed out” error suggests that the pre-upgrade check failed because a task could not complete, likely due to FSVMs being unable to communicate. Verifying connectivity between FSVMs is the first step to diagnose and resolve this issue, ensuring that subsequent pre-upgrade checks can proceed.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“If the pre-upgrade check fails with a ‘Sub task poll timed out’ error, this typically indicates a communication issue between FSVMs. As an initial troubleshooting step, verify connectivity between the FSVMs, ensuring that the Storage and Client networks are properly configured and that there are no network restrictions (e.g., firewalls) preventing communication.”
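A minimal connectivity sweep across the FSVMs could be sketched as follows; the IPs are hypothetical, and a real check in a secured environment would also verify VLANs, firewall rules, and the specific ports Files requires.

```python
# Sketch: ping each FSVM to spot basic reachability problems. FSVM IPs are
# hypothetical; a thorough check would also verify VLAN configuration,
# firewall rules, and the specific ports Files uses.
import subprocess

FSVM_IPS = ["10.10.52.11", "10.10.52.12", "10.10.52.13"]

for ip in FSVM_IPS:
    rc = subprocess.run(["ping", "-c", "2", ip],
                        capture_output=True).returncode
    print(f"{ip}: {'reachable' if rc == 0 else 'UNREACHABLE'}")
```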
An administrator needs to configure Files to forward logs to a syslog server. How could the administrator complete this task?
Configure the syslog in Prism Element.
Configure the syslog in Files Console.
Use the CLI in an FSVM.
Use the CLI in a CVM.
Nutanix Files, part of Nutanix Unified Storage (NUS), generates logs for file service operations, which can be forwarded to a syslog server for centralized logging and monitoring. The process to configure syslog forwarding for Nutanix Files involves interacting with the File Server Virtual Machines (FSVMs), as they handle the file services and generate the relevant logs.
Analysis of Options:
Option A (Configure the syslog in Prism Element): Incorrect. Prism Element manages cluster-level settings, such as storage and VM configurations, but it does not provide a direct interface to configure syslog forwarding for Nutanix Files. Syslog configuration for Files is specific to the FSVMs.
Option B (Configure the syslog in Files Console): Incorrect. The Files Console (accessible via Prism Central) is used for managing Files shares, FSVMs, and policies, but it does not have a built-in option to configure syslog forwarding. Syslog configuration requires direct interaction with the FSVMs.
Option C (Use the CLI in an FSVM): Correct. Nutanix Files logs are managed at the FSVM level, and syslog forwarding can be configured by logging into an FSVM and using the command-line interface (CLI) to set up the syslog server details. This is the standard method documented by Nutanix for enabling syslog forwarding for Files.
Option D (Use the CLI in a CVM): Incorrect. The Controller VM (CVM) manages the Nutanix cluster’s storage and services, but it does not handle Files-specific logging. Syslog configuration for Files must be done on the FSVMs, not the CVMs.
Configuration Process:
To configure syslog forwarding, the administrator would:
SSH into one of the FSVMs in the Files deployment.
Use the nutanix user account to access the FSVM CLI.
Run commands to configure the syslog server (e.g., modify the /etc/syslog.conf file or use Nutanix-specific commands to set the syslog server IP and port).
Restart the syslog service on the FSVM to apply the changes. This process ensures that Files logs are forwarded to the specified syslog server (a scripted sketch follows the documentation extract below).
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“To forward Nutanix Files logs to a syslog server, you must configure syslog settings on each FSVM. Log in to an FSVM using SSH and the ‘nutanix’ user account. Use the CLI to update the syslog configuration by specifying the syslog server’s IP address and port. After configuration, restart the syslog service to apply the changes.”
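The sketch below automates the documented steps over SSH; the syslog server address, FSVM IPs, and credentials are placeholders, and the exact configuration file and service name should be confirmed against version-specific Nutanix documentation.

```python
# Sketch: append a forwarding rule to the syslog configuration on each FSVM
# and restart the syslog service, per the steps above. IPs, credentials, the
# config file path, and the service name are placeholders; consult the
# version-specific Nutanix documentation for the supported procedure.
import paramiko

SYSLOG_SERVER = "10.10.50.100"  # hypothetical syslog server
FSVM_IPS = ["10.10.52.11", "10.10.52.12", "10.10.52.13"]

for ip in FSVM_IPS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, username="nutanix", password="PASSWORD")
    # '*.*' forwards all facilities/severities; '@' means UDP in syslog syntax.
    ssh.exec_command(
        f"echo '*.* @{SYSLOG_SERVER}:514' | sudo tee -a /etc/syslog.conf "
        "&& sudo systemctl restart rsyslog"
    )
    ssh.close()
```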
An administrator is trying to create a Distributed Share, but the Use Distributed Share/Export type instead of Standard option is not present when creating the share.
What is most likely the cause for this?
The file server does not have the correct license
The cluster only has three nodes.
The file server resides on a single node cluster.
The cluster is configured with hybrid storage
The most likely cause for this issue is that the file server resides on a single node cluster. A distributed share is a type of SMB share or NFS export that distributes the hosting of top-level directories across multiple FSVMs, which improves load balancing and performance. A distributed share cannot be created on a single node cluster, because there is only one FSVM available. A distributed share requires a multi-node cluster with enough FSVMs (at least three) to distribute the directories. Therefore, the option to use the Distributed share/export type instead of Standard is not present when creating a share on a single node cluster. References: Nutanix Files Administration Guide, page 33; Nutanix Files Solution Guide, page 8
A single-node cluster cannot support a Distributed Share because it can only host one FSVM, whereas Distributed Shares require at least three FSVMs for distribution and high availability. This limitation causes the “Use Distributed Share/Export type instead of Standard” option to be absent when creating a share, as the cluster does not meet the minimum requirements.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Distributed Shares require a minimum of three FSVMs to ensure scalability and high availability, which typically requires a cluster with at least three nodes. On a single-node cluster, only Standard Shares are supported, and the option to create a Distributed Share will not be available in the Files Console.”
An administrator sees that the Cluster drop-down or the Subnets drop-down shows empty lists or an error message when no Prism Element clusters or subnets are available for deployment, respectively. Additionally, the administrator sees that no Prism Element clusters are listed during the addition of multi-cluster to the Object Store. What would cause the Prism Element clusters or subnets to not appear in the user interface?
The logged-in user does not have access to any Prism Central.
The logged-in user does not have access to any subnets on the allowed Prism Central.
The administrator has just created an access policy granting user access to Prism Element.
The administrator has just created an access policy denying user access to a subnet in Prism Element.
Nutanix Objects, part of Nutanix Unified Storage (NUS), is deployed and managed through Prism Central (PC), which provides a centralized interface for managing multiple Prism Element (PE) clusters. When deploying Objects or adding multi-cluster support to an Object Store, the administrator selects a PE cluster and associated subnets from drop-down lists in the Prism Central UI. If these drop-down lists are empty or show an error, it indicates an issue with visibility or access to the clusters or subnets.
Analysis of Options:
Option A (The logged-in user does not have access to any Prism Central): Correct. Prism Central is required to manage Nutanix Objects deployments and multi-cluster configurations. If the logged-in user does not have access to any Prism Central instance (e.g., due to RBAC restrictions or no PC being deployed), they cannot see any PE clusters or subnets in the UI, as Prism Central is the interface that aggregates this information. This would result in empty drop-down lists for clusters and subnets, as well as during multi-cluster addition for the Object Store.
Option B (The logged-in user does not have access to any subnets on the allowed Prism Central): Incorrect. While subnet access restrictions could prevent subnets from appearing in the Subnets drop-down, this does not explain why the Cluster drop-down is empty or why no clusters are listed during multi-cluster addition. The issue is broader—likely related to Prism Central access itself—rather than subnet-specific permissions.
Option C (The administrator has just created an access policy granting user access to Prism Element): Incorrect. Granting access to Prism Element directly does not affect visibility in Prism Central’s UI. Objects deployment and multi-cluster management are performed through Prism Central, not Prism Element. Even if the user has PE access, they need PC access to see clusters and subnets in the Objects deployment workflow.
Option D (The administrator has just created an access policy denying user access to a subnet in Prism Element): Incorrect. Denying access to a subnet in Prism Element might affect subnet visibility in the Subnets drop-down, but it does not explain the empty Cluster drop-down or the inability to see clusters during multi-cluster addition. Subnet access policies are secondary to the broader issue of Prism Central access.
Why Option A?
The core issue is that Prism Central is required to display PE clusters and subnets in the UI for Objects deployment and multi-cluster management. If the logged-in user does not have access to any Prism Central instance (e.g., they are not assigned the necessary role, such as Prism Central Admin, or no PC is registered), the UI cannot display any clusters or subnets, resulting in empty drop-down lists. This also explains why no clusters are listed during multi-cluster addition for the Object Store, as Prism Central is the central management point for such operations.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Deployment Guide (available on the Nutanix Portal):
“Nutanix Objects deployment and multi-cluster management are performed through Prism Central. The logged-in user must have access to Prism Central with appropriate permissions (e.g., Prism Central Admin role) to view Prism Element clusters and subnets in the deployment UI. If the user does not have access to Prism Central, the Cluster and Subnets drop-down lists will be empty, and multi-cluster addition will fail to list available clusters.”
A company uses Linux and Windows workstations. The administrator is evaluating solutions for their file storage needs.
The solution should support these requirements:
• Distributed File System
• Active Directory integrated
• Scale out architecture
Mine
Objects
Volumes
Files
The solution that meets the company’s requirements for their file storage needs is Files. Files is a feature that allows users to create and manage file server instances (FSIs) on a Nutanix cluster. FSIs can provide SMB and NFS access to file shares and exports for different types of clients. Files supports these requirements:
Distributed File System: Files uses a distributed file system that spans across multiple FSVMs (File Server VMs), which improves scalability, performance, and availability.
Active Directory integrated: Files can integrate with Active Directory for authentication and authorization of SMB clients and multiprotocol NFS clients.
Scale out architecture: Files can scale out by adding more FSVMs to an existing FSI or creating new FSIs on the same or different clusters. References: Nutanix Files Administration Guide, page 27; Nutanix Files Solution Guide, page 6
An administrator has performed an AOS upgrade, but noticed that the compression on containers is not happening. What is the delay before compression begins on the Files container?
30 minutes
60 minutes
12 hours
24 hours
Nutanix Files, part of Nutanix Unified Storage (NUS), stores its data in containers managed by the Nutanix Acropolis Operating System (AOS). AOS supports data compression to optimize storage usage, which can be applied to Files containers. After an AOS upgrade, compression settings may take effect after a delay, as the system needs to stabilize and apply the new configuration.
Analysis of Options:
Option A (30 minutes): Incorrect. A 30-minute delay is too short for AOS to stabilize and initiate compression after an upgrade. Compression is a background process that typically requires a longer delay to ensure system stability.
Option B (60 minutes): Correct. According to Nutanix documentation, after an AOS upgrade, there is a default delay of 60 minutes before compression begins on containers, including those used by Nutanix Files. This delay allows the system to complete post-upgrade tasks (e.g., metadata updates, cluster stabilization) before initiating resource-intensive operations like compression.
Option C (12 hours): Incorrect. A 12-hour delay is excessive for compression to start. While some AOS processes (e.g., data deduplication) may have longer delays, compression typically begins sooner to optimize storage usage.
Option D (24 hours): Incorrect. A 24-hour delay is also too long for compression to start. Nutanix aims to apply compression relatively quickly after the system stabilizes, and 60 minutes is the documented delay for this process.
Why Option B?
After an AOS upgrade, compression on containers (including Files containers) is delayed by 60 minutes to allow the cluster to stabilize and complete post-upgrade tasks. This ensures that compression does not interfere with critical operations immediately following the upgrade, balancing system performance and storage optimization.
Exact Extract from Nutanix Documentation:
From the Nutanix AOS Administration Guide (available on the Nutanix Portal):
“After an AOS upgrade, compression on containers, including those used by Nutanix Files, is delayed by 60 minutes. This delay allows the cluster to stabilize and complete post-upgrade tasks before initiating compression, ensuring system reliability.”
An administrator is tasked with creating an Objects store with the following settings:
• Medium Performance (around 10,000 requests per second)
• 10 TiB capacity
• Versioning disabled
• Hosted on an AHV cluster
Immediately after creation, the administrator is asked to change the name of the Objects store.
How will the administrator achieve this request?
Enable versioning and then rename the Object store, disable versioning
The Objects store can only be renamed if hosted on ESXi.
Delete and recreate a new Objects store with the updated name
The administrator can achieve this request by deleting and recreating a new Objects store with the updated name. Objects is a feature that allows users to create and manage object storage clusters on a Nutanix cluster. Objects clusters can provide S3-compatible access to buckets and objects for various applications and users. Objects clusters can be created and configured in Prism Central. However, once an Objects cluster is created, its name cannot be changed or edited. Therefore, the only way to change the name of an Objects cluster is to delete the existing cluster and create a new cluster with the updated name. References: Nutanix Objects User Guide, page 9; Nutanix Objects Solution Guide, page 8
Which two statements are true about HA for a file server? (Choose two.)
Files reassigns the IP address of the FSVM to another FSVM.
Share availability is not impacted for several minutes.
Multiple FSVMs can share a single host.
Affinity rules affect HA.
Nutanix Files, part of Nutanix Unified Storage (NUS), uses File Server Virtual Machines (FSVMs) to manage file services. High Availability (HA) in Nutanix Files ensures that shares remain accessible even if an FSVM or host fails. HA mechanisms include IP reassignment, FSVM distribution, and integration with hypervisor HA features.
Analysis of Options:
Option A (Files reassigns the IP address of the FSVM to another FSVM): Correct. In a Nutanix Files HA scenario, if an FSVM fails (e.g., due to a host failure), the IP address of the failed FSVM is reassigned to another FSVM in the file server. This ensures that clients can continue accessing shares without disruption, as the share’s endpoint (IP address) remains the same, even though the backend FSVM handling the request has changed.
Option B (Share availability is not impacted for several minutes): Incorrect. While Nutanix Files HA minimizes downtime, there is typically a brief disruption (seconds to a minute) during an FSVM failure as the IP address is reassigned and the new FSVM takes over. The statement “not impacted for several minutes” implies a longer acceptable downtime, which is not accurate—HA aims to restore availability quickly, typically within a minute.
Option C (Multiple FSVMs can share a single host): Incorrect. Nutanix Files HA requires that FSVMs are distributed across different hosts to ensure fault tolerance. By default, one FSVM runs per host, and Nutanix uses anti-affinity rules to prevent multiple FSVMs from residing on the same host. This ensures that a single host failure does not impact multiple FSVMs, which would defeat the purpose of HA.
Option D (Affinity rules affect HA): Correct. Nutanix Files leverages hypervisor HA features (e.g., AHV HA) and uses affinity/anti-affinity rules to manage FSVM placement. Anti-affinity rules ensure that FSVMs are placed on different hosts, which is critical for HA—if multiple FSVMs were on the same host, a host failure would impact multiple FSVMs, reducing availability. These rules directly affect how HA functions in a Files deployment.
Selected Statements:
A: IP reassignment is a core HA mechanism in Nutanix Files to maintain share accessibility during FSVM failures.
D: Affinity (specifically anti-affinity) rules ensure FSVM distribution across hosts, which is essential for effective HA.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“High Availability (HA) in Nutanix Files ensures continuous share access during failures. If an FSVM fails, its IP address is reassigned to another FSVM in the file server to maintain client connectivity. Nutanix Files uses anti-affinity rules to distribute FSVMs across different hosts, ensuring that a single host failure does not impact multiple FSVMs, which is critical for HA.”
An organization currently has two Objects instances deployed between two sites. Both instances are managed via the same Prism Central to simplify management.
The organization has a critical application with all data in a bucket that needs to be replicated to the secondary site for DR purposes. The replication needs to be asynchronous, including all delete marker versions.
With Object Browser, upload the data at the destination site.
Leverage the Objects Baseline Replication Tool from a Linux VM.
Use a Protection Domain to replicate the Objects Volume Group.
Create a Bucket replication rule, set the destination Objects instance.
The administrator can achieve this requirement by creating a bucket replication rule and setting the destination Objects instance. Bucket replication is a feature that allows administrators to replicate data from one bucket to another bucket on a different Objects instance for disaster recovery or data migration purposes. Bucket replication can be configured with various parameters, such as replication mode, replication frequency, replication status, etc. Bucket replication can also replicate all versions of objects, including delete markers, which are special versions that indicate that an object has been deleted. By creating a bucket replication rule and setting the destination Objects instance, the administrator can replicate data from one Objects instance to another asynchronously, including all delete markers and versions. References: Nutanix Objects User Guide, page 19; Nutanix Objects Solution Guide, page 9
Nutanix Objects, part of Nutanix Unified Storage (NUS), supports replication of buckets between Object Store instances for disaster recovery (DR). The organization has two Objects instances across two sites, managed by the same Prism Central, and needs to replicate a bucket’s data asynchronously, including delete marker versions, to the secondary site.
Analysis of Options:
Option A (With Object Browser, upload the data at the destination site): Incorrect. The Object Browser is a UI tool in Nutanix Objects for managing buckets and objects, but it is not designed for replication. Manually uploading data to the destination site does not satisfy the requirement for asynchronous replication, nor does it handle delete marker versions automatically.
Option B (Leverage the Objects Baseline Replication Tool from a Linux VM): Incorrect. The Objects Baseline Replication Tool is not a standard feature in Nutanix Objects documentation. While third-party tools or scripts might be used for manual replication, Nutanix provides a native solution for bucket replication, making this option unnecessary and incorrect for satisfying the requirement.
Option C (Use a Protection Domain to replicate the Objects Volume Group): Incorrect. Protection Domains are used in Nutanix for protecting VMs and Volume Groups (block storage) via replication, but they do not apply to Nutanix Objects. Objects uses bucket replication rules for DR, not Protection Domains.
Option D (Create a Bucket replication rule, set the destination Objects instance): Correct. Nutanix Objects supports bucket replication rules to replicate data between Object Store instances asynchronously. This feature allows the organization to replicate the bucket to the secondary site, including all versions (such as delete marker versions), as required. The replication rule can be configured in Prism Central, specifying the destination Object Store instance, and it supports asynchronous replication for DR purposes.
Why Option D?
Bucket replication in Nutanix Objects is the native mechanism for asynchronous replication between Object Store instances. It supports replicating all versions of objects, including delete marker versions (which indicate deleted objects in a versioned bucket), ensuring that the secondary site has a complete replica of the bucket for DR. Since both Object Store instances are managed by the same Prism Central, the administrator can easily create a replication rule to meet the requirement.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Nutanix Objects supports asynchronous bucket replication for disaster recovery. To replicate a bucket to a secondary site, create a bucket replication rule in Prism Central, specifying the destination Object Store instance. The replication rule can be configured to include all versions, including delete marker versions, ensuring that the secondary site maintains a complete replica of the bucket for DR purposes.”
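To make "delete marker versions" concrete at the S3 level, the following sketch lists object versions and delete markers in a versioned bucket, which could be used to compare the source and destination sites after replication; the endpoint, credentials, and bucket name are hypothetical.

```python
# Illustration: list object versions and delete markers in a versioned bucket,
# e.g., to compare source and destination after replication. The endpoint,
# credentials, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects-dr.example.com",  # hypothetical DR endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

resp = s3.list_object_versions(Bucket="critical-app-bucket")
for v in resp.get("Versions", []):
    print("version:", v["Key"], v["VersionId"])
for dm in resp.get("DeleteMarkers", []):
    print("delete marker:", dm["Key"], dm["VersionId"])
```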
An administrator has received reports of resource issues on a file server. The administrator needs to review the following graphs, as displayed in the exhibit:
Storage Used
Open Connections
Number of Files
Top Shares by Current Capacity
Top Shares by Current Connections
Where should the administrator complete this action?
Files Console Shares View
Files Console Monitoring View
Files Console Data Management View
Files Console Dashboard View
Nutanix Files, part of Nutanix Unified Storage (NUS), provides a management interface called the Files Console, accessible via Prism Central. The administrator needs to review graphs related to resource usage on a file server, including Storage Used, Open Connections, Number of Files, Top Shares by Current Capacity, and Top Shares by Current Connections. These graphs provide insights into the file server’s performance and resource utilization, helping diagnose reported resource issues.
Analysis of Options:
Option A (Files Console Shares View): Incorrect. The Shares View in the Files Console displays details about individual shares (e.g., capacity, permissions, quotas), but it does not provide high-level graphs like Storage Used, Open Connections, or Top Shares by Current Capacity/Connections. It focuses on share-specific settings, not overall file server metrics.
Option B (Files Console Monitoring View): Incorrect. While “Monitoring View” sounds plausible, there is no specific “Monitoring View” tab in the Files Console. Monitoring-related data (e.g., graphs, metrics) is typically presented in the Dashboard View, not a separate Monitoring View.
Option C (Files Console Data Management View): Incorrect. There is no “Data Management View” in the Files Console. Data management tasks (e.g., Smart Tiering, as in Question 58) are handled in other sections, but graphs like Storage Used and Top Shares are not part of a dedicated Data Management View.
Option D (Files Console Dashboard View): Correct. The Dashboard View in the Files Console provides an overview of the file server’s performance and resource usage through various graphs and metrics. It includes graphs such as Storage Used (total storage consumption), Open Connections (active client connections), Number of Files (total files across shares), Top Shares by Current Capacity (shares consuming the most storage), and Top Shares by Current Connections (shares with the most active connections). This view is designed to help administrators monitor and troubleshoot resource issues, making it the correct location for reviewing these graphs.
Why Option D?
The Files Console Dashboard View is the central location for monitoring file server metrics through graphs like Storage Used, Open Connections, Number of Files, and Top Shares by Capacity/Connections. These graphs provide a high-level overview of resource utilization, allowing the administrator to diagnose reported resource issues effectively.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“The Files Console Dashboard View provides an overview of file server performance and resource usage through graphs, including Storage Used, Open Connections, Number of Files, Top Shares by Current Capacity, and Top Shares by Current Connections. Use the Dashboard View to monitor and troubleshoot resource issues on the file server.”
A company is currently using Objects 3.2 with a single Object Store and a single S3 bucket that was created as a repository for their data protection (backup) application. In the near future, additional S3 buckets will be created as this was requested by their DevOps team. After facing several issues when writing backup images to the S3 bucket, the vendor of the data protection solution found the issue to be a compatibility issue with the S3 protocol. The proposed solution is to use an NFS repository instead of the S3 bucket as backup is a critical service, and this issue was unknown to the backup software vendor with no foreseeable date to solve this compatibility issue. What is the fastest solution that requires the least consumption of compute capacity (CPU and memory) of their Nutanix infrastructure?
Delete the existing bucket, create a new bucket, and enable NFS v3 access.
Deploy Files and create a new Share with multi-protocol access enabled.
Redeploy Objects using the latest version, create a new bucket, and enable NFS v3 access.
Upgrade Objects to the latest version, create a new bucket, and enable NFS v3 access.
The company is using Nutanix Objects 3.2, a component of Nutanix Unified Storage (NUS), which provides S3-compatible object storage. Due to an S3 protocol compatibility issue with their backup application, they need to switch to an NFS repository. The solution must be the fastest and consume the least compute capacity (CPU and memory) on their Nutanix infrastructure.
Analysis of Options:
Option A (Delete the existing bucket, create a new bucket, and enable NFS v3 access): Incorrect. Nutanix Objects does support NFS access for buckets starting with version 3.5 (as per Nutanix documentation), but Objects 3.2 does not have this capability. Since the company is using Objects 3.2, this option is not feasible without upgrading or redeploying Objects, which is not mentioned in this option. Even if NFS were supported, deleting and recreating buckets does not address the compatibility issue directly and may still consume compute resources for bucket operations.
Option B (Deploy Files and create a new Share with multi-protocol access enabled): Correct. Nutanix Files, another component of NUS, supports NFS natively and can be deployed to create an NFS share quickly. Multi-protocol access (e.g., NFS and SMB) can be enabled on a Files share, allowing the backup application to use NFS as a repository. Deploying a Files instance with a minimal configuration (e.g., 3 FSVMs) consumes relatively low compute resources compared to redeploying or upgrading Objects, and it is the fastest way to provide an NFS repository without modifying the existing Objects deployment.
Option C (Redeploy Objects using the latest version, create a new bucket, and enable NFS v3 access): Incorrect. Redeploying Objects with the latest version (e.g., 4.0 or later) would allow NFS v3 access, as this feature was introduced in Objects 3.5. However, redeployment is a time-consuming process that involves uninstalling the existing Object Store, redeploying a new instance, and reconfiguring buckets. This also consumes significant compute resources during the redeployment process, making it neither the fastest nor the least resource-intensive solution.
Option D (Upgrade Objects to the latest version, create a new bucket, and enable NFS v3 access): Incorrect. Upgrading Objects from 3.2 to a version that supports NFS (e.g., 3.5 or later) is a viable solution, as it would allow enabling NFS v3 access on a new bucket. However, upgrading Objects involves downtime, validation, and potential resource overhead during the upgrade process, which does not align with the requirement for the fastest solution with minimal compute capacity usage.
Why Option B is the Fastest and Least Resource-Intensive:
Nutanix Files Deployment: Deploying a new Nutanix Files instance is a straightforward process that can be completed in minutes via Prism Central or the Files Console. A minimal Files deployment (e.g., 3 FSVMs) requires 4 vCPUs and 12 GiB of RAM per FSVM (as noted in Question 2), totaling 12 vCPUs and 36 GiB of RAM. This is a relatively low resource footprint compared to redeploying or upgrading an Objects instance, which may require more compute resources during the process.
NFS Support: Nutanix Files natively supports NFS, and enabling multi-protocol access (NFS and SMB) on a share is a simple configuration step that does not require modifying the existing Objects deployment.
Speed: Deploying Files and creating a share can be done without downtime to the existing Objects setup, making it faster than upgrading or redeploying Objects.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Deployment Guide (available on the Nutanix Portal):
“Nutanix Files supports multi-protocol access, allowing shares to be accessed via both NFS and SMB protocols. To enable NFS access, deploy a Files instance and create a share with multi-protocol access enabled. A minimal Files deployment requires 3 FSVMs, each with 4 vCPUs and 12 GiB of RAM, ensuring efficient resource usage.”
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Starting with Objects 3.5, NFS v3 access is supported for buckets, allowing them to be mounted as NFS file systems. This feature is not available in earlier versions, such as Objects 3.2.”
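Once the Files share is created with NFS access enabled, pointing the backup application at it is an ordinary NFS mount. A generic Linux sketch follows; the server name, export path, and mount point are hypothetical, and the commands must run with root privileges.

```python
# Generic sketch: mount the new Files NFS export and write a test file to
# confirm the backup target is reachable. The server name, export path, and
# mount point are hypothetical; run as root.
import pathlib
import subprocess

subprocess.run(
    ["mount", "-t", "nfs", "fileserver.company.com:/backup-repo", "/mnt/backup"],
    check=True,
)
pathlib.Path("/mnt/backup/write-test.txt").write_text("backup target reachable\n")
```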
An administrator is able to review and modify objects in a registered ESXi cluster from a PE instance, but when the administrator attempts to deploy an Objects cluster to the same ESXi cluster, the error shown in the exhibit appears.
What is the appropriate configuration to verify to allow successful Objects cluster deployment to this ESXi cluster?
Ensure that vCenter in PE cluster is registered using FQDN and that vCenter details in Objects UI are using FQDN.
Replace the expired self-signed SSL certificate for the Object Store with a non-expired certificate signed by a valid Certificate Authority.
Replace the expired self-signed SSL certificate for the Object Store with a non-expired self-signed SSL certificate.
Ensure that vCenter in PE cluster is registered using FQDN and that vCenter details in Objects UI are using IP address.
The appropriate configuration to verify to allow successful Objects cluster deployment to this ESXi cluster is to ensure that vCenter in PE cluster is registered using FQDN (Fully Qualified Domain Name) and that vCenter details in Objects UI are using FQDN. FQDN is a domain name that specifies the exact location of a host in the domain hierarchy. For example, esxi01.nutanix.com is an FQDN for an ESXi host. Using FQDN instead of IP addresses can avoid certificate validation errors when deploying Objects clusters to ESXi clusters. References: Nutanix Objects User Guide, page 9; Nutanix Objects Troubleshooting Guide, page 5
What process is initiated when a share is protected for the first time?
Share data movement is started to the recovery site.
A remote snapshot is created for the share.
The share is created on the recovery site with a similar configuration.
A local snapshot is created for the share.
Nutanix Files, part of Nutanix Unified Storage (NUS), supports data protection for shares through mechanisms like replication and snapshots. When a share is “protected for the first time,” this typically refers to enabling a protection mechanism, such as a replication policy (e.g., NearSync, as seen in Question 24) or a snapshot schedule, to ensure the share’s data can be recovered in case of failure.
Analysis of Options:
Option A (Share data movement is started to the recovery site): Incorrect. While data movement to a recovery site occurs during replication (e.g., with NearSync), this is not the first step when a share is protected. Before data can be replicated, a baseline snapshot is typically created to capture the share’s initial state. Data movement follows the snapshot creation, not as the first step.
Option B (A remote snapshot is created for the share): Incorrect. A remote snapshot implies that a snapshot is created directly on the recovery site, which is not how Nutanix Files protection works initially. The first step is to create a local snapshot on the primary site, which is then replicated to the remote site as part of the protection process (e.g., via NearSync).
Option C (The share is created on the recovery site with a similar configuration): Incorrect. While this step may occur during replication setup (e.g., the remote site’s file server is configured to host a read-only copy of the share, as seen in the exhibit for Question 24), it is not the first process initiated. The share on the recovery site is created as part of the replication process, which begins after a local snapshot is taken.
Option D (A local snapshot is created for the share): Correct. When a share is protected for the first time (e.g., by enabling a snapshot schedule or replication policy), the initial step is to create a local snapshot of the share on the primary site. This snapshot captures the share’s current state and serves as the baseline for protection mechanisms like replication or recovery. For example, in a NearSync setup, a local snapshot is taken, and then the snapshot data is replicated to the remote site.
Why Option D?
Protecting a share in Nutanix Files typically involves snapshots as the foundation for data protection. The first step is to create a local snapshot of the share on the primary site, which captures the share’s data and metadata. This snapshot can then be used for local recovery (e.g., via Self-Service Restore) or replicated to a remote site for DR (e.g., via NearSync). The question focuses on the initial process, making the creation of a local snapshot the correct answer.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“When a share is protected for the first time, whether through a snapshot schedule or a replication policy, the initial step is to create a local snapshot of the share on the primary site. This snapshot captures the share’s current state and serves as the baseline for subsequent protection operations, such as replication to a remote site or local recovery.”
An administrator has been tasked with updating the cool-off interval of an existing WORM share from the default value to five minutes. How should the administrator complete this task?
Delete and re-create the WORM share.
Update the worm_cooloff_interval parameter using CLI.
Contact support to update the WORM share.
Use FSM to update the worm_cooloff_interval parameter.
Nutanix Files, part of Nutanix Unified Storage (NUS), supports WORM (Write Once, Read Many) shares to enforce immutability for compliance and data retention. A WORM share prevents files from being modified or deleted for a specified retention period. The “cool-off interval” (or cool-off period) is the time after a file is written to a WORM share during which it can still be modified or deleted before becoming immutable. The default cool-off interval is typically 1 minute, and the administrator wants to update it to 5 minutes.
Analysis of Options:
Option A (Delete and re-create the WORM share): Incorrect. Deleting and re-creating the WORM share would remove the existing share and its data, which is disruptive and unnecessary. The cool-off interval can be updated without deleting the share, making this an inefficient and incorrect approach.
Option B (Update the worm_cooloff_interval parameter using CLI): Correct. The worm_cooloff_interval parameter controls the cool-off period for WORM shares in Nutanix Files. This parameter can be updated using the Nutanix CLI (e.g., ncli or afs commands) on the file server. The administrator can log into an FSVM, use the CLI to set the worm_cooloff_interval to 5 minutes (300 seconds), and apply the change without disrupting the share. This is the most direct and efficient method to update the cool-off interval.
Option C (Contact support to update the WORM share): Incorrect. Contacting Nutanix support is unnecessary for this task, as updating the cool-off interval is a standard administrative action that can be performed using the CLI. Support is typically needed for complex issues, not for configurable parameters like this.
Option D (Use FSM to update the worm_cooloff_interval parameter): Incorrect. FSM (File Server Manager) is not a standard Nutanix tool or interface for managing Files configurations. The correct method is to use the CLI (option B) to update the worm_cooloff_interval parameter. While the Files Console (FSM-like interface) can manage some share settings, the cool-off interval requires CLI access.
Why Option B?
The worm_cooloff_interval parameter is a configurable setting in Nutanix Files that controls the cool-off period for WORM shares. Updating this parameter via the CLI (e.g., using ncli or afs commands on an FSVM) allows the administrator to change the cool-off interval from the default (1 minute) to 5 minutes without disrupting the existing share. This is the recommended and most efficient method per Nutanix documentation.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“The cool-off interval for a WORM share, which determines the time after a file is written during which it can still be modified, is controlled by the worm_cooloff_interval parameter. To update this interval, use the CLI on an FSVM to set the parameter (e.g., to 300 seconds for 5 minutes) using commands like ncli or afs, then apply the change.”
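A scripted version of this CLI step might look like the sketch below; the afs command name shown is a placeholder to be confirmed against version-specific documentation, and the FSVM IP and credentials are hypothetical.

```python
# Sketch: set the WORM cool-off interval to 300 seconds (5 minutes) on an FSVM.
# The command string is a placeholder -- confirm the exact afs/ncli syntax for
# your Files version; the FSVM IP and credentials are also hypothetical.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("10.10.52.11", username="nutanix", password="PASSWORD")
_, out, _ = ssh.exec_command(
    "afs misc.set_worm_cooloff_interval 300"  # hypothetical command name
)
print(out.read().decode())
ssh.close()
```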
Refer to the exhibit.
What does the “X” represent on the icon?
Share Disconnected File
Corrupt ISO
Distributed shared file
Tiered File
The “X” on the icon represents a distributed shared file, which is a file that belongs to a distributed share or export. A distributed share or export is a type of SMB share or NFS export that distributes the hosting of top-level directories across multiple FSVMs. The “X” indicates that the file is not hosted by the current FSVM, but by another FSVM in the cluster. The “X” also helps to identify which files are eligible for migration when using the Nutanix Files Migration Tool. References: Nutanix Files Administration Guide, page 34; Nutanix Files Migration Tool User Guide, page 10
Which configuration is required for an Objects deployment?
Configure Domain Controllers on both Prism Element and Prism Central.
Configure VPC on both Prism Element and Prism Central.
Configure a dedicated storage container on Prism Element or Prism Central.
Configure NTP servers on both Prism Element and Prism Central.
The configuration that is required for an Objects deployment is to configure NTP servers on both Prism Element and Prism Central. NTP (Network Time Protocol) is a protocol that synchronizes the clocks of devices on a network with a reliable time source. NTP servers are devices that provide accurate time information to other devices on a network. Configuring NTP servers on both Prism Element and Prism Central is required for an Objects deployment, because it ensures that the time settings are consistent and accurate across the Nutanix cluster and the Objects cluster, which can prevent any synchronization issues or errors. References: Nutanix Objects User Guide, page 9; Nutanix Objects Deployment Guide
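Because deployment failures caused by clock drift are easy to rule out in advance, the reachability and offset of the NTP server that both Prism Element and Prism Central point to can be checked from any workstation. The sketch below is illustrative only, not a Nutanix tool; it assumes the third-party ntplib package (pip install ntplib), and the server name and drift threshold are placeholders.

```python
# Illustrative pre-deployment check: queries an NTP server and reports the
# local clock offset. Assumes the third-party "ntplib" package is installed;
# the server name and threshold below are placeholders.
import ntplib

NTP_SERVER = "pool.ntp.org"  # replace with the server configured on PE and PC

def check_ntp(server: str, max_offset_seconds: float = 5.0) -> None:
    response = ntplib.NTPClient().request(server, version=3, timeout=5)
    print(f"{server}: offset = {response.offset:+.3f}s")
    if abs(response.offset) > max_offset_seconds:
        print("WARNING: clock drift could cause deployment or sync issues")

check_ntp(NTP_SERVER)
```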
An organization deployed Files in multiple sites, including different geographical locations across the globe. The organization has the following requirements to improve their data management lifecycle:
• Provide a centralized management solution.
• Automate archiving tier policies for compliance purposes.
• Protect the data against ransomware.
Which solution will satisfy the organization's requirements?
Prism Central
Data Lens
Files Analytics
Data Lens can provide a centralized management solution for Files deployments in multiple sites, including different geographical locations. Data Lens can also automate archiving tier policies for compliance purposes, by allowing administrators to create policies based on file attributes, such as age, size, type, or owner, and move files to a lower-cost tier or delete them after a specified period. Data Lens can also protect the data against ransomware, by allowing administrators to block malicious file signatures from being written to the file system. References: Nutanix Data Lens Administration Guide
How many configurable snapshots are supported for SSR in a file server?
25
50
100
200
The number of configurable snapshots supported for SSR in a file server is 200. SSR (Self-Service Restore) is a Nutanix Files feature that lets end users recover previous versions of their files and directories directly from scheduled share-level snapshots (for example, through the Windows Previous Versions tab), without administrator intervention. SSR snapshot schedules and retention are configurable, and a file server supports up to 200 configurable SSR snapshots. References: Nutanix Files Administration Guide, page 81; Nutanix Files Solution Guide, page 9
An organization is implementing their first Nutanix cluster. In addition to hosting VMs, the cluster will be providing block storage services to existing physical servers, as well as CIFS shares and NFS exports to the end users. Security policies dictate that separate networks are used for different functions, which are already configured as:
Management - VLAN 500 - 10.10.50.0/24
iSCSI access - VLAN 510 - 10.10.51.0/24
Files access - VLAN 520 - 10.10.52.0/24
How should the administrator configure the cluster to ensure the CIFS and NFS traffic is on the correct network and accessible by the end users?
Create a new subnet in Network Configuration, assign it VLAN 520, and configure the Files client network on it.
Configure the Data Services IP in Prism Element with an IP on VLAN 520.
Create a new virtual switch in Network Configuration, assign it VLAN 520, and configure the Files client network on it.
Configure the Data Services IP in Prism Central with an IP on VLAN 520.
The organization is deploying a Nutanix cluster to provide block storage (via iSCSI), CIFS shares, and NFS exports (via Nutanix Files). Nutanix Files, part of Nutanix Unified Storage (NUS), uses File Server Virtual Machines (FSVMs) to serve CIFS (SMB) and NFS shares to end users. The security policy requires separate networks:
Management traffic on VLAN 500 (10.10.50.0/24).
iSCSI traffic on VLAN 510 (10.10.51.0/24).
Files traffic on VLAN 520 (10.10.52.0/24).
To ensure CIFS and NFS traffic uses VLAN 520 and is accessible by end users, the cluster must be configured to route Files traffic over the correct network.
Analysis of Options:
Option A (Create a new subnet in Network Configuration, assign it VLAN 520, and configure the Files client network on it): Correct. Nutanix Files requires two networks: a Client network (for CIFS/NFS traffic to end users) and a Storage network (for internal communication with the cluster’s storage pool). To isolate Files traffic on VLAN 520, the administrator should create a new subnet in the cluster’s Network Configuration (via Prism Element), assign it to VLAN 520, and then configure the Files instance to use this subnet as its Client network. This ensures that CIFS and NFS traffic is routed over VLAN 520, making the shares accessible to end users on that network.
Option B (Configure the Data Services IP in Prism Element with an IP on VLAN 520): Incorrect. The Data Services IP is used for iSCSI traffic (as seen in Question 25, where it was configured for VLAN 510). It is not used for CIFS or NFS traffic, which is handled by Nutanix Files. Configuring the Data Services IP on VLAN 520 would incorrectly route iSCSI traffic, not Files traffic.
Option C (Create a new virtual switch in Network Configuration, assign it VLAN 520, and configure the Files client network on it): Incorrect. A virtual switch is used for VM networking (e.g., for AHV VMs), but Nutanix Files traffic is handled by FSVMs, which use the cluster’s network configuration for external communication. While FSVMs are VMs, their network configuration is managed at the Files instance level by specifying the Client network, not by creating a new virtual switch. The correct approach is to configure the subnet for the Files Client network, as in option A.
Option D (Configure the Data Services IP in Prism Central with an IP on VLAN 520): Incorrect. As with option B, the Data Services IP is for iSCSI traffic, not CIFS/NFS traffic. Additionally, the Data Services IP is configured in Prism Element, not Prism Central, making this option doubly incorrect.
Why Option A?
Nutanix Files requires a Client network for CIFS and NFS traffic. By creating a new subnet in the cluster’s Network Configuration, assigning it to VLAN 520, and configuring the Files instance to use this subnet as its Client network, the administrator ensures that all CIFS and NFS traffic is routed over VLAN 520, meeting the security policy and ensuring accessibility for end users.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Nutanix Files requires a Client network for CIFS and NFS traffic to end users. To isolate Files traffic on a specific network, create a subnet in the cluster’s Network Configuration in Prism Element, assign it the appropriate VLAN (e.g., VLAN 520), and configure the Files instance to use this subnet as its Client network. This ensures that all client traffic (SMB/NFS) is routed over the specified network.”
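For administrators who script cluster configuration, the subnet from option A can also be created through the Prism v3 REST API before being assigned as the Files client network. The sketch below is a hedged illustration of that call: the Prism address, credentials, and subnet name are placeholder assumptions, some fields (such as the virtual switch or cluster reference) are omitted for brevity and may be required in a real environment, and the same change can be made entirely in the Prism UI.

```python
# Hedged sketch: creates a VLAN-backed subnet via the Prism v3 REST API so it
# can later be selected as the Files client network. Address, credentials,
# and names are placeholder assumptions.
import requests

PRISM = "https://prism.example.local:9440"  # placeholder Prism address
AUTH = ("admin", "password")                # placeholder credentials

payload = {
    "metadata": {"kind": "subnet"},
    "spec": {
        "name": "files-client-vlan520",     # placeholder subnet name
        "resources": {
            "subnet_type": "VLAN",
            "vlan_id": 520,                 # Files access VLAN from the question
        },
    },
}

resp = requests.post(f"{PRISM}/api/nutanix/v3/subnets",
                     json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print("Created subnet:", resp.json()["metadata"].get("uuid"))
```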
What tool can be used to report on a specific user's activity within a Files environment?
Prism Element Alerts menu
Prism Central Activity menu
Data Lens Audit Trails
Files Console Usage
The tool that can be used to report on a specific user’s activity within a Files environment is Data Lens Audit Trails. Data Lens Audit Trails is a feature that provides detailed logs of all file operations performed by users on Files shares and exports, such as create, read, write, delete, rename, move, copy, etc. Data Lens Audit Trails can help administrators track and audit user actions and identify any unauthorized or malicious activities. The administrator can use Data Lens Audit Trails to filter and search for a specific user’s activity based on various criteria, such as file name, file type, file size, file path, file share, file server, operation type, operation time, operation status, and so on. References: Nutanix Files Administration Guide, page 98; Nutanix Data Lens User Guide
Nutanix Files, part of Nutanix Unified Storage (NUS), supports monitoring and reporting on user activities to track file access, modifications, and other operations. To report on a specific user’s activity, a tool that provides detailed audit trails at the file level is required.
Analysis of Options:
Option A (Prism Element Alerts menu): Incorrect. The Alerts menu in Prism Element provides cluster-level alerts (e.g., hardware failures, storage issues), but it does not offer detailed user activity reports for Files shares.
Option B (Prism Central Activity menu): Incorrect. The Activity menu in Prism Central provides high-level activity logs for cluster operations (e.g., VM creation, policy updates), but it does not provide detailed file-level user activity reports for Nutanix Files.
Option C (Data Lens Audit Trails): Correct. Nutanix Data Lens, a service integrated with Nutanix Files, provides audit trails that track user activities at the file level. This includes details such as file access, modifications, deletions, and permission changes, allowing administrators to report on a specific user’s actions within the Files environment.
Option D (Files Console Usage): Incorrect. The Files Console provides usage statistics for shares (e.g., storage consumption, share-level metrics), but it does not provide granular audit trails for specific users.
Why Data Lens Audit Trails?
Nutanix Data Lens is designed for data governance and security, offering features like audit trails, anomaly detection, and ransomware protection. The Audit Trails feature specifically allows administrators to filter and report on user activities, such as which files a user accessed, modified, or deleted, making it the ideal tool for this task.
Exact Extract from Nutanix Documentation:
From the Nutanix Data Lens Administration Guide (available on the Nutanix Portal):
“Data Lens Audit Trails provide detailed tracking of user activities within Nutanix Files shares. Administrators can view and filter audit logs to report on specific user actions, including file access, modifications, deletions, and permission changes. This feature is accessible via the Data Lens dashboard.”
What is the result of an administrator applying the lifecycle policy "Expire current objects after # days/months/years" to an object with versioning enabled?
The policy deletes any past versions of the object after the specified time and does not delete any current version of the object.
The policy deletes the current version of the object after the specified time and does not delete any past versions of the object.
The policy does not delete the current version of the object after the specified time and does not delete any past versions of the object.
The policy deletes any past versions of the object after the specified time and deletes any current version of the object.
Nutanix Objects, part of Nutanix Unified Storage (NUS), supports lifecycle policies to manage the retention and expiration of objects in a bucket. When versioning is enabled, a bucket can store multiple versions of an object, with the “current version” being the latest version and “past versions” being older iterations. The lifecycle policy “Expire current objects after # days/months/years” specifically targets the current version of an object.
Analysis of Options:
Option A (The policy deletes any past versions of the object after the specified time and does not delete any current version of the object): Incorrect. The “Expire current objects” policy targets the current version, not past versions. A separate lifecycle rule (e.g., “Expire non-current versions”) would be needed to delete past versions.
Option B (The policy deletes the current version of the object after the specified time and does not delete any past versions of the object): Correct. The “Expire current objects” policy deletes the current version of an object after the specified time period (e.g., # days/months/years). Since versioning is enabled, past versions are not affected by this policy and remain in the bucket unless a separate rule targets them.
Option C (The policy does not delete the current version of the object after the specified time and does not delete any past versions of the object): Incorrect. The policy explicitly states that it expires (deletes) the current version after the specified time, so this option contradicts the policy’s purpose.
Option D (The policy deletes any past versions of the object after the specified time and deletes any current version of the object): Incorrect. The “Expire current objects” policy does not target past versions—it only deletes the current version after the specified time.
Why Option B?
When versioning is enabled, the lifecycle policy “Expire current objects after # days/months/years” applies only to the current version of the object. After the specified time, the current version is deleted, and the most recent past version becomes the new current version (if no new uploads occur). Past versions are not deleted unless a separate lifecycle rule (e.g., for non-current versions) is applied.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“When versioning is enabled on a bucket, the lifecycle policy ‘Expire current objects after # days/months/years’ deletes the current version of an object after the specified time period. Past versions of the object are not affected by this policy and will remain in the bucket unless a separate lifecycle rule is applied to expire non-current versions.”
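Because Objects exposes S3-compatible APIs, the policy in question can be expressed with a standard S3 client. The following boto3 sketch applies an "expire current objects" rule; the endpoint, credentials, and bucket name are placeholders, and the commented-out NoncurrentVersionExpiration element shows the separate rule that would be needed to also expire past versions.

```python
# Minimal sketch using boto3 against an S3-compatible Objects endpoint.
# Endpoint, credentials, and bucket name are placeholders. The rule expires
# *current* object versions after 30 days; past versions are untouched unless
# a separate NoncurrentVersionExpiration rule is added.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.local",  # placeholder Objects endpoint
    aws_access_key_id="ACCESS_KEY",                # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-current-after-30d",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},       # apply to the whole bucket
            "Expiration": {"Days": 30},     # deletes current versions only
            # To also expire past versions, a separate element would be added:
            # "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }],
    },
)
```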
An administrator has connected 100 users to multiple Files shares to perform read and write activity. The administrator needs to view audit trails in File Analytics of these 100 users. From which two Audit Trail options can the administrator choose to satisfy this task? (Choose two.)
Share Name
Client IP
Directory
Folders
Nutanix File Analytics, part of Nutanix Unified Storage (NUS), provides audit trails to track user activities within Nutanix Files shares. Audit trails include details such as who accessed a file, from where, and what actions were performed. The administrator needs to view the audit trails for 100 users, which requires filtering or grouping the audit data by relevant criteria.
Analysis of Options:
Option A (Share Name): Correct. Audit trails in File Analytics can be filtered by Share Name, allowing the administrator to view activities specific to a particular share. Since the 100 users are connected to multiple shares, filtering by Share Name helps narrow down the audit trails to the shares being accessed by these users, making it easier to analyze their activities.
Option B (Client IP): Correct. File Analytics audit trails include the Client IP address from which a user accesses a share (as noted in Question 14). Filtering by Client IP allows the administrator to track the activities of users based on their IP addresses, which can be useful if the 100 users are accessing shares from known IPs, helping to identify their read/write activities.
Option C (Directory): Incorrect. While audit trails track file and directory-level operations, “Directory” is not a standard filter option in File Analytics audit trails. The audit trails can show activities within directories, but the primary filtering options are more granular (e.g., by file) or higher-level (e.g., by share).
Option D (Folders): Incorrect. Similar to “Directory,” “Folders” is not a standard filter option in File Analytics audit trails. While folder-level activities are logged, the audit trails are typically filtered by Share Name, Client IP, or specific files, not by a generic “Folders” category.
Selected Options:
A: Filtering by Share Name allows the administrator to focus on the specific shares accessed by the 100 users.
B: Filtering by Client IP enables tracking user activities based on their IP addresses, which is useful for identifying the 100 users’ actions across multiple shares.
Exact Extract from Nutanix Documentation:
From the Nutanix File Analytics Administration Guide (available on the Nutanix Portal):
“File Analytics Audit Trails allow administrators to filter user activities by various criteria, including Share Name and Client IP. Filtering by Share Name enables viewing activities on a specific share, while filtering by Client IP helps track user actions based on their source IP address.”
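As an illustration of how such a report might be assembled offline, the sketch below filters an audit-trail export by share name and client IP. It assumes the audit events have already been exported to CSV; the file name and the column names (share_name, client_ip, user, operation) are hypothetical and would need to match the actual export format.

```python
# Illustrative offline analysis of an audit-trail export; not a File
# Analytics API. The CSV file name and all column names are hypothetical.
import pandas as pd

events = pd.read_csv("audit_trails_export.csv")  # hypothetical export file

# Narrow to the shares and the client subnet the 100 users connect from.
shares_of_interest = {"finance-share", "hr-share"}            # placeholders
in_client_subnet = events["client_ip"].str.startswith("10.10.52.")

subset = events[events["share_name"].isin(shares_of_interest) & in_client_subnet]
print(subset.groupby("user")["operation"].value_counts())
```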
An administrator has received an alert AI60068 – ADSDuplicationIPDetected. The details of the alert are as follows:
Which error log should the administrator review to determine the related Duplicate IP address involved?
Tcpkill.log
Minerva_cvm.log
Solver.log
Minerva.nvm.log