HCSP-Presales – Data Center Network Planning and Design V1.0 Questions and Answers
Which of the following technologies can be used together with VXLAN to implement active-active access?
Options:
A. Smart Link
B. BFD
C. M-LAG
D. EVPN
Answer:
C, D
Explanation:
In Huawei CloudFabric VXLAN-based data center networks, active-active access is a key design goal to ensure high availability, load balancing, and optimal resource utilization.
M-LAG (C) is one of the primary technologies used to achieve active-active access at the access layer. It allows a server or downstream device to connect to two leaf switches simultaneously, both of which actively forward traffic. This eliminates single points of failure and ensures link redundancy with load sharing.
EVPN (D), when used with VXLAN, provides the control plane for MAC/IP route distribution. It enables multi-homing and supports all-active forwarding, ensuring consistent forwarding decisions across the fabric. EVPN also prevents loops and optimizes traffic forwarding using mechanisms like aliasing and mass withdrawal.
Smart Link (A) is typically used for primary/backup (active-standby) scenarios, not active-active. BFD (B) is a fast fault detection protocol and does not provide active-active forwarding capability.
Huawei best practices recommend combining VXLAN + EVPN + M-LAG to achieve highly reliable, scalable, and active-active data center access architectures.
Therefore, the correct answers are C and D.
How long does it take to perform a switchover in M-LAG 2.0 upon a single point of failure?
Options:
A. 20 ms
B. 100 ms
C. 5 ms
D. 1 s
Answer:
C
Explanation:
Huawei M-LAG 2.0 is designed to provide ultra-fast convergence and high availability in modern data center networks. One of its key enhancements over traditional M-LAG implementations is the ability to achieve fast fault detection and rapid switchover, minimizing service interruption.
In the event of a single point of failure (such as link or device failure), M-LAG 2.0 leverages mechanisms like BFD (Bidirectional Forwarding Detection) and optimized peer-link synchronization to quickly detect failures and redirect traffic. The switchover time is typically around 5 milliseconds, ensuring near-hitless failover for critical services.
This ultra-fast convergence is particularly important for applications such as AI workloads, financial systems, and real-time services, where even small delays can impact performance.
Compared to traditional technologies (which may take tens or hundreds of milliseconds), Huawei M-LAG 2.0 significantly improves network reliability, service continuity, and user experience.
Therefore, the correct answer is C (5 ms).
iMaster NCE-Fabric can manage Huawei CloudEngine switches in either in-band or out-of-band mode. In in-band mode, iMaster NCE-Fabric uses northbound interfaces to manage switches through the service network. In out-of-band mode, southbound interfaces of iMaster NCE-Fabric are connected to the management interfaces of switches through an independent management network.
Options:
A. TRUE
B. FALSE
Answer:
B
Explanation:
The statement is false due to an incorrect description of in-band management communication.
Huawei iMaster NCE-Fabric indeed supports both in-band and out-of-band deployment modes, but the interfaces used are misrepresented:
In in-band mode, iMaster NCE-Fabric uses southbound interfaces (not northbound) to communicate with switches over the service network. These southbound interfaces are responsible for delivering configurations and collecting device information.
In out-of-band mode, the description is correct: the controller connects to device management interfaces via a dedicated management network, ensuring separation from service traffic.
The confusion arises because:
Northbound interfaces are used for communication between the controller and upper-layer systems (e.g., cloud platforms like OpenStack or ManageOne).
Southbound interfaces are used for communication between the controller and network devices.
Huawei architecture strictly separates these roles to enable clear control-plane layering and automation.
Therefore, because the statement incorrectly mentions northbound interfaces in in-band mode, the correct answer is FALSE.
In Huawei's CloudFabric Network Virtualization Solution, which of the following software connects to the server virtualization platform?
Options:
A. iMaster NCE-Fabric
B. iMaster NCE-FabricInsight
Answer:
A
Explanation:
In Huawei CloudFabric architecture, iMaster NCE-Fabric acts as the central network controller and automation platform that directly integrates with server virtualization platforms such as FusionSphere, VMware vCenter, and OpenStack. This integration enables automated network provisioning, policy enforcement, and dynamic resource orchestration between compute and network layers.
Through northbound APIs, iMaster NCE-Fabric communicates with virtualization platforms to obtain tenant, VM, and network information. It then translates these requirements into fabric configurations, such as VXLAN, EVPN, and underlay routing policies. This tight integration is essential for automated service deployment and end-to-end orchestration, which is a core capability in Huawei's CloudFabric solution.
On the other hand, iMaster NCE-FabricInsight is focused on intelligent O&M, monitoring, and analytics. It provides telemetry-based visibility, fault detection, and performance optimization, but it does not directly connect to or integrate with virtualization platforms for service provisioning.
Therefore, the correct answer is iMaster NCE-Fabric, as it is the component responsible for connecting to server virtualization systems and enabling automation.
Which of the following are characteristics of distributed VXLAN gateways?
Options:
A. A distributed VXLAN gateway (leaf node) only needs to learn the ARP entries of servers connected to it, whereas a centralized Layer 3 VXLAN gateway needs to learn the ARP entries of all servers on a network. Therefore, the number of ARP entries supported is no longer a bottleneck on distributed VXLAN gateways, and the network scalability is improved.
B. Forwarding paths are not optimal. Inter-subnet Layer 3 traffic between devices connected to the same gateway in a data center must be transmitted to a unified Layer 3 gateway for forwarding.
C. A leaf node can function as both a Layer 2 VXLAN gateway and a Layer 3 VXLAN gateway, supporting flexible deployment.
D. The number of ARP entries supported is a bottleneck. A single Layer 3 gateway is used. For tenants whose traffic is forwarded by the Layer 3 gateway, ARP entries must be generated for the tenants on the Layer 3 gateway, but only a limited number of ARP entries are allowed by the Layer 3 gateway, which impedes data center network expansion.
Answer:
A, C
Explanation:
In Huawei CloudFabric VXLAN design, distributed gateway architecture is a key enhancement over traditional centralized gateway models, especially for large-scale data centers.
Option A is correct because distributed gateways (typically deployed on leaf nodes) only maintain local ARP/MAC entries for directly connected hosts. This significantly reduces ARP table pressure compared to centralized gateways, where a single device must learn all entries. This improves scalability and performance, which is critical in multi-tenant environments.
Option C is also correct, as Huawei leaf switches can simultaneously act as Layer 2 VXLAN gateways (bridging) and Layer 3 VXLAN gateways (routing). This enables distributed inter-subnet routing directly at the access layer, reducing latency and improving east-west traffic efficiency.
Option B is incorrect because it describes a limitation of centralized gateways, where traffic must traverse a central node, leading to suboptimal paths. Distributed gateways eliminate this issue.
Option D is also incorrect, as it again describes centralized gateway constraints, not distributed ones.
Thus, distributed VXLAN gateways provide better scalability, optimal forwarding paths, and flexible deployment, making A and C correct.
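The scaling difference between the two gateway models can be sketched with a back-of-the-envelope calculation. The numbers below (500 hosts per leaf, 40 leaves) are illustrative assumptions, not Huawei sizing figures:

```python
def arp_entries_required(hosts_per_leaf: int, num_leaves: int, mode: str) -> int:
    # Centralized gateway: one Layer 3 gateway must learn ARP entries
    # for every host behind every leaf in the fabric.
    # Distributed gateway: each leaf learns only its locally attached
    # hosts, so the per-device table size stays flat as leaves are added.
    if mode == "centralized":
        return hosts_per_leaf * num_leaves
    if mode == "distributed":
        return hosts_per_leaf
    raise ValueError(f"unknown mode: {mode}")

# 40 leaves with 500 hosts each:
# centralized -> 20000 entries on one gateway; distributed -> 500 per leaf.
```

Adding a 41st leaf grows the centralized table by another 500 entries, while each distributed leaf's table is unchanged, which is the scalability point option A makes.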
Which of the following statements is false about the access design of server leaf nodes?
Options:
A. Determine the number of server leaf nodes based on the number of servers.
B. Select the model of server leaf nodes depending on whether microsegmentation or IPv6 deployment or evolution towards them is required.
C. The M-LAG, stacking, and standalone modes are often used for server access. Stacking is recommended because it can ensure service continuity during the upgrade of access switches.
D. Select the model of server leaf nodes based on the server access bandwidth (10GE/25GE access) and the ratio of server leaf nodes' uplink bandwidth to spine nodes' downlink bandwidth.
Answer:
C
Explanation:
According to Huawei data center network design best practices, server access layer design emphasizes high reliability, scalability, and simplicity. Options A, B, and D correctly reflect Huawei recommendations. The number of leaf nodes is indeed determined by server scale and port requirements. Device selection must consider advanced features such as microsegmentation and IPv6 readiness. Additionally, bandwidth planning (downlink server access and uplink oversubscription ratios) is a critical design factor.
However, option C is incorrect. While M-LAG and standalone modes are commonly used in modern data center designs, stacking is generally not recommended in CloudFabric architectures. Huawei discourages stacking in spine-leaf fabrics because it introduces control plane complexity, reduces network scalability, and may create larger fault domains. Instead, M-LAG is preferred as it provides active-active forwarding, better fault isolation, and supports smooth upgrades without impacting services.
Therefore, stating that stacking is recommended is incorrect, making option C the false statement.
What are the application scenarios of the overlay network?
Options:
A. Private cloud (converged resource pool deployment and data center integration)
B. Hosting services in traditional IDCs
C. Public cloud service (IaaS/PaaS/SaaS)
D. Network NFV cloud (network cloud and SDN + NFV)
Answer:
A, B, C, D
Explanation:
Huawei data center design documents clearly state that overlay networks (typically based on VXLAN EVPN) are highly versatile and support multiple application scenarios across modern and traditional environments.
In private cloud scenarios, overlay networks enable resource pooling and seamless data center interconnection, allowing flexible workload migration and multi-tenant isolation. For traditional IDC hosting services, overlays provide improved scalability and tenant isolation compared to VLAN-based designs, making them suitable for legacy-to-cloud evolution.
In public cloud environments (IaaS/PaaS/SaaS), overlay networks are essential for large-scale multi-tenancy, enabling logical network segmentation, automation, and elastic service provisioning. Similarly, in NFV cloud scenarios, overlays integrate with SDN and NFV architectures to support virtualized network functions, service chaining, and dynamic service deployment.
Huawei CloudFabric emphasizes that overlay technology is not limited to a single use case but is a foundational technology across all modern data center architectures. Therefore, all listed options are valid application scenarios.
Which of the following statements is false about in-band and out-of-band deployment of iMaster NCE-Fabric?
Options:
A. In in-band deployment mode, the management network and service network share service network ports. The southbound IP address of iMaster NCE-Fabric communicates with the management IP addresses of devices through the service network.
B. In in-band deployment mode, service network ports have redundant connections, eliminating single points of failure and providing high reliability.
C. In in-band deployment mode, the management network and service network are isolated and do not affect each other.
D. In out-of-band deployment mode, iMaster NCE-Fabric connects to the management interfaces of devices through the out-of-band management switch.
Answer:
C
Explanation:
Huawei iMaster NCE-Fabric supports both in-band and out-of-band deployment modes, each with distinct characteristics.
In in-band deployment mode (A, B):
Management traffic shares the service network, meaning no separate management network is required.
The controller communicates with device management IPs over the service (data) network.
Redundant links can be configured to ensure high reliability and avoid single points of failure.
Therefore, A and B are correct descriptions.
Option C is false because it incorrectly states that the management and service networks are isolated in in-band mode. In reality, they are NOT isolated; they share the same physical infrastructure, which is a defining feature of in-band deployment.
In out-of-band deployment mode (D):
A separate management network is used.
iMaster NCE-Fabric connects to devices via dedicated management interfaces through a management switch, ensuring isolation from service traffic.
Huawei recommends choosing the deployment mode based on network scale, security, and reliability requirements.
Therefore, the correct answer is C.
Which of the following statements is false about the overlay network or underlay network?
Options:
A. The overlay network requires bare optical fibers for links.
B. The underlay network is transparent to devices (such as servers, VAS devices, and external routers) that are connected to NVEs.
C. The overlay network has an independent forwarding plane and an independent control protocol, which are VXLAN and BGP EVPN, respectively.
D. The overlay network is a logical network that is established over the underlay network through VXLAN.
Answer:
A
Explanation:
In Huawei CloudFabric architecture, the underlay network is the physical IP fabric, while the overlay network is a logical network built on top of it using technologies like VXLAN and BGP EVPN .
Option A is false because the overlay network does not require bare optical fibers. It operates independently of the physical medium and can run over any IP-based underlay (Ethernet, optical, etc.). The overlay abstracts the physical infrastructure.
Option B is correct: the underlay network is transparent to endpoints (servers, firewalls, routers). These devices only interact with the overlay and are unaware of the underlying transport.
Option C is correct: the overlay has its own forwarding plane (VXLAN encapsulation) and control plane (BGP EVPN), enabling scalable and automated network virtualization.
Option D is correct: the overlay is a logical network built over the underlay, providing tenant isolation and flexible service deployment.
Huawei best practices emphasize decoupling the overlay from the underlay, ensuring flexibility, scalability, and simplified operations.
Therefore, the correct answer is A.
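The claim that VXLAN abstracts the transport (and, as a later question notes, carries no encryption) is visible in its header layout. A minimal sketch of the 8-byte VXLAN header from RFC 7348; this is illustrative Python, not Huawei code:

```python
import struct

VNI_VALID_FLAG = 0x08  # the "I" bit: the VNI field is valid

def build_vxlan_header(vni: int) -> bytes:
    # VXLAN header (RFC 7348), 8 bytes:
    #   word 1: 8 flag bits (only 0x08 defined) + 24 reserved bits
    #   word 2: 24-bit VNI + 8 reserved bits
    # Nothing here depends on the physical link type, and there is no
    # key or cipher field: confidentiality needs a separate layer (IPsec).
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VNI_VALID_FLAG << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    word1, word2 = struct.unpack("!II", header)
    if not (word1 >> 24) & VNI_VALID_FLAG:
        raise ValueError("VNI-valid flag not set")
    return word2 >> 8
```

The header only names a 24-bit segment (the VNI); everything below it is ordinary IP/UDP, which is why any IP-capable underlay suffices.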
Which of the following technologies can be used for L4-L7 load balancing in Huawei's CloudFabric Solution?
Options:
A. HAProxy
B. Nginx
C. IP ECMP
D. LVS
Answer:
B, D
Explanation:
Huawei documentation for Elastic Load Balance, the load-balancing architecture referenced in Huawei cloud and data center designs, states that for Layer 4 traffic using TCP/UDP, incoming traffic is routed only through the LVS cluster. For Layer 7 traffic using HTTP/HTTPS, incoming traffic is routed first to the LVS cluster and then to the Nginx cluster before reaching backend servers. That means Huawei explicitly uses LVS for L4 load balancing and Nginx together with LVS for L7 load balancing.
By contrast, IP ECMP is a Layer 3 equal-cost forwarding/load-sharing mechanism, not an L4-L7 service load balancer in Huawei's CloudFabric context. Huawei's ECMP documentation discusses route-based traffic distribution at the IP layer, which is fundamentally different from application/service load balancing. Therefore, the correct answers are B and D.
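The L4/L7 split in the ELB model above can be summarized as a simple path-selection rule. This is a hypothetical helper for illustration, not actual ELB code:

```python
def elb_forwarding_path(protocol: str) -> list:
    # Per the ELB model: L4 (TCP/UDP) traffic traverses only the LVS
    # cluster; L7 (HTTP/HTTPS) traffic goes through LVS first, then the
    # Nginx cluster, before reaching the backend servers.
    p = protocol.lower()
    if p in ("tcp", "udp"):
        return ["LVS cluster", "backend server"]
    if p in ("http", "https"):
        return ["LVS cluster", "Nginx cluster", "backend server"]
    raise ValueError(f"unsupported protocol: {protocol}")
```

Note that the Nginx cluster never appears on a pure L4 path, which is why LVS alone answers the L4 case while L7 needs both.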
In the CloudFabric Solution, firewalls can connect to service leaf nodes or border leaf nodes (combined with service leaf nodes).
Options:
A. TRUE
B. FALSE
Answer:
A
Explanation:
In Huawei CloudFabric architecture, service integration and flexible deployment of security devices such as firewalls are key design principles. Firewalls can be deployed in multiple ways depending on service requirements, traffic patterns, and scalability considerations.
Firewalls can be connected to:
Service leaf nodes: these are dedicated nodes used for service insertion (e.g., firewall, load balancer). Traffic can be steered through these nodes using policy-based routing or service chaining.
Border leaf nodes (combined with the service leaf role): in some designs, border leaf nodes (which connect to external networks such as the WAN or the Internet) can also integrate service functions, including firewall connectivity. This reduces hardware requirements and simplifies deployment.
Huawei supports both centralized and distributed service deployment models, allowing firewalls to be flexibly inserted into the network fabric. Integration is typically achieved using VXLAN, EVPN, and service chaining technologies, ensuring seamless traffic steering and policy enforcement.
This flexibility enhances:
Network scalability
Security enforcement
Operational simplicity
Therefore, the statement is TRUE.
Which of the following is not an advantage of the spine-leaf networking over the traditional networking?
Options:
A. Resources are pooled between zones to avoid uneven resource distribution.
B. The logical two-layer architecture has a high oversubscription ratio.
C. The flattened network features redundancy, reliability, and high throughput, facilitating east-west network capacity expansion.
D. Each leaf node is connected to one or two spine nodes to enable communication between all types of nodes connected to the network.
Answer:
B
Explanation:
Huawei's data center design guidelines emphasize that spine-leaf architecture is a flattened, two-layer fabric designed to overcome the limitations of traditional three-tier networks. One of its key advantages is low and predictable oversubscription ratios, achieved through equal-cost multi-path (ECMP) forwarding and uniform link distribution.
Option A is correct as an advantage because spine-leaf enables resource pooling across zones, eliminating isolated silos and improving utilization. Option C is also a core benefit: the architecture provides high reliability, redundancy, and scalable east-west traffic capacity, which is critical for cloud and distributed applications. Option D reflects the fundamental design principle where each leaf connects to multiple spine switches, ensuring full-mesh reachability and consistent latency.
However, Option B states that the architecture has a high oversubscription ratio, which contradicts Huawei best practices. In fact, spine-leaf is specifically designed to minimize oversubscription, ensuring balanced traffic and predictable performance.
Therefore, B is not an advantage, making it the correct answer.
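The oversubscription ratio at a leaf is simply server-facing bandwidth over spine-facing bandwidth. A quick sketch of the calculation; the example port counts are assumptions for illustration, not a Huawei reference design:

```python
def leaf_oversubscription(down_ports: int, down_gbps: float,
                          up_ports: int, up_gbps: float) -> float:
    # Total downlink (server-facing) bandwidth divided by total uplink
    # (spine-facing) bandwidth. 1.0 is non-blocking; larger values mean
    # east-west traffic can contend on the uplinks.
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# A leaf with 48 x 25GE server ports and 6 x 100GE uplinks:
ratio = leaf_oversubscription(48, 25, 6, 100)  # 1200 / 600 = 2.0, i.e., 2:1
```

Spine-leaf designs keep this ratio low (and identical on every leaf), which is what makes performance predictable; a "high oversubscription ratio" would be a drawback, not an advantage.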
Which of the following technologies or protocols natively support encryption?
Options:
A. VXLAN
B. GRE
C. SSL VPN
D. IPsec VPN
Answer:
C, D
Explanation:
In Huawei data center and network security architecture, native encryption support is a key differentiator between tunneling technologies and secure communication protocols.
SSL VPN (C) and IPsec VPN (D) both provide built-in encryption mechanisms. SSL VPN uses TLS/SSL protocols to encrypt application-layer traffic, making it suitable for secure remote access. IPsec VPN operates at the network layer and provides confidentiality, integrity, and authentication through protocols such as ESP (Encapsulating Security Payload), making it widely used for site-to-site and data center interconnection security.
On the other hand, VXLAN (A) and GRE (B) are encapsulation/tunneling technologies that do not inherently provide encryption. VXLAN is used for overlay networking (e.g., tenant isolation in CloudFabric), while GRE is used for simple tunneling across IP networks. Both require additional security mechanisms (such as IPsec) if encryption is needed.
Huawei CloudFabric design guidelines clearly separate overlay encapsulation (VXLAN) from security protocols (IPsec/SSL), emphasizing that encryption must be implemented using dedicated security technologies.
Therefore, the correct answers are C and D.
In manual mode, an Eth-Trunk is manually created and interfaces are manually added to the Eth-Trunk, without involving LACP. All active links load balance and forward data.
Options:
A. TRUE
B. FALSE
Answer:
A
Explanation:
In Huawei networking, Eth-Trunk (link aggregation) can operate in two modes: manual mode and LACP mode. In manual mode, the Eth-Trunk is statically configured by the administrator, and member interfaces are manually added without using the Link Aggregation Control Protocol (LACP).
In this mode, all member links that are up are considered active by default, and they simultaneously participate in traffic forwarding and load balancing. Traffic distribution is typically based on hash algorithms (e.g., source/destination MAC or IP), ensuring efficient utilization of available bandwidth.
Unlike LACP mode, manual mode does not perform dynamic negotiation or link state detection between devices. Therefore, it requires consistent configuration on both ends to avoid issues such as loops or traffic blackholing. However, it is simpler and can be useful in controlled environments.
In Huawei data center designs, especially in M-LAG scenarios, LACP is generally preferred for better reliability and fault detection. Still, the statement accurately describes manual-mode behavior.
Therefore, the statement is TRUE.
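The hash-based distribution described above can be sketched as follows. Real switches hash vendor-specific field combinations (MAC/IP/port tuples) in hardware, so CRC32 over the MAC pair here is purely illustrative:

```python
import zlib

def select_member_link(src_mac: str, dst_mac: str, active_links: list) -> str:
    # Flow-based load balancing: hash the flow key and map it onto the
    # set of active member links. A given flow always lands on the same
    # link (preserving packet order within the flow), while different
    # flows spread across all active links.
    flow_key = f"{src_mac}->{dst_mac}".encode()
    return active_links[zlib.crc32(flow_key) % len(active_links)]

links = ["10GE1/0/1", "10GE1/0/2", "10GE1/0/3"]
# The same flow deterministically maps to the same member link:
assert select_member_link("aa:bb", "cc:dd", links) == select_member_link("aa:bb", "cc:dd", links)
```

Because the mapping is purely a function of packet fields, both ends must be configured consistently; nothing in manual mode verifies that the far side agrees on the member set.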
Which of the following multi-DC interconnection solutions are supported by the CloudFabric Solution?
Options:
A. Multi-PoD
B. Multi-VPC
C. Multi-Site
D. Multi-DC
Answer:
A, C, D
Explanation:
Huawei CloudFabric supports multiple data center interconnection (DCI) solutions to meet different scalability and geographic deployment requirements.
Multi-PoD (A) is supported and is used within a single data center to divide it into multiple Pods for modular expansion. It enables large-scale resource pooling while maintaining consistent architecture and simplified management.
Multi-Site (C) is a key CloudFabric capability, allowing multiple geographically distributed data centers to interconnect. This supports disaster recovery (DR), geo-redundancy, and workload migration, often implemented using EVPN/VXLAN across sites.
Multi-DC (D) is a broader concept encompassing interconnection between multiple data centers, including active-active and active-standby designs. Huawei supports this through DCI solutions integrated with CloudFabric, ensuring seamless service continuity.
Multi-VPC (B) is not a standard Huawei CloudFabric multi-DC interconnection model; it is more commonly associated with public cloud networking concepts rather than Huawei's data center fabric architecture.
Therefore, the correct answers are A, C, and D, which represent the supported multi-data-center interconnection solutions in Huawei CloudFabric.
In each VPC, you can create one logical router, multiple logical switches, and multiple logical ports.
Options:
A. TRUE
B. FALSE
Answer:
A
Explanation:
In Huawei CloudFabric and cloud networking models (aligned with OpenStack and SDN-based architectures), a VPC (Virtual Private Cloud) represents an isolated tenant network environment that contains multiple logical networking components.
Within a VPC:
A logical router provides Layer 3 routing and interconnection between subnets. Typically, one logical router is associated with a VPC to manage north-south and east-west routing.
Multiple logical switches (VXLAN networks or BDs) can be created to represent different Layer 2 segments within the same tenant environment.
Each logical switch can have multiple logical ports, which represent VM interfaces, containers, or service endpoints connected to the network.
This hierarchical design allows:
Flexible network segmentation
Multi-tier application deployment (web/app/db tiers)
Scalable tenant isolation
Huawei CloudFabric, integrated with platforms like iMaster NCE-Fabric and ManageOne, automates the creation and orchestration of these components.
Therefore, the statement is TRUE, as it accurately reflects the standard VPC network model.
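The one-router / many-switches / many-ports containment described above can be sketched as a small data model. All names here (`tenant-a`, `lr-1`, `ls-web`, etc.) are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LogicalSwitch:
    name: str                                   # one Layer 2 segment (VXLAN/BD)
    ports: list = field(default_factory=list)   # logical ports: VM NICs, containers, endpoints

@dataclass
class VPC:
    name: str
    router: str                                 # exactly one logical router per VPC
    switches: list = field(default_factory=list)

# A tenant VPC with one router, two logical switches, three logical ports:
vpc = VPC(name="tenant-a", router="lr-1")
vpc.switches.append(LogicalSwitch("ls-web", ports=["vm1-eth0", "vm2-eth0"]))
vpc.switches.append(LogicalSwitch("ls-db", ports=["db1-eth0"]))
```

The single `router` field (versus the switch and port lists) mirrors the 1:N:N relationship the statement describes, which is also how multi-tier (web/app/db) applications map onto a VPC.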
Which of the following statements is false about IP routes?
Options:
A. The optimal route to a specific destination can be determined by only one routing protocol at a certain moment. To determine the optimal route, all routing protocols (including static routing) are configured with priorities.
B. Direct routes are the routes destined for the network segment to which directly connected interfaces belong.
C. Each routing protocol can import routes discovered by other routing protocols, direct routes, and static routes.
D. If routes to the same destination network are discovered by two routing protocols, the cost values of the routes are first compared, and then their priorities.
Answer:
D
Explanation:
In Huawei routing principles, route selection follows a strict hierarchy. When multiple routing protocols advertise routes to the same destination, the system first compares the route preference (priority), not the cost. The route with the lowest preference value (highest priority) is selected. Only within the same routing protocol are cost/metric values compared to determine the best route.
Option A is correct because only one optimal route is installed in the routing table at a time, based on protocol preference. Option B is also correct: direct routes are automatically generated for networks connected to local interfaces. Option C is valid as Huawei devices support route import (redistribution) between routing protocols, enabling flexible network design.
Option D is incorrect because it reverses the decision logic. It incorrectly states that cost is compared before priority, which contradicts Huawei’s routing selection process.
Therefore, the false statement is D.
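The preference-then-cost logic amounts to a lexicographic sort key. The preference values below match commonly documented Huawei VRP defaults, but treat them as illustrative, since defaults can vary by product and software version:

```python
from dataclasses import dataclass

# Commonly documented Huawei VRP default preferences (lower value wins).
PREFERENCE = {"direct": 0, "ospf": 10, "is-is": 15, "static": 60, "rip": 100}

@dataclass
class Route:
    protocol: str
    cost: int
    next_hop: str

def select_best(candidates: list) -> Route:
    # Preference is compared first; cost only breaks ties among routes
    # with the same preference (i.e., from the same protocol). Comparing
    # cost before preference, as option D claims, is the wrong order.
    return min(candidates, key=lambda r: (PREFERENCE[r.protocol], r.cost))

best = select_best([
    Route("ospf", cost=100, next_hop="10.0.0.1"),
    Route("static", cost=0, next_hop="10.0.0.2"),
])
# OSPF (preference 10) beats static (preference 60) despite its higher cost.
```

If cost were compared first, the static route (cost 0) would win here, which is exactly the reversed logic that makes option D false.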
In the CloudFabric Solution, logical switches are abstracted from the Layer 2 VXLAN and provide Layer 3 routing and gateway services for logical ports.
Options:
A. TRUE
B. FALSE
Answer:
A
Explanation:
In Huawei CloudFabric architecture, logical switches are a key abstraction used to simplify network service deployment and management. These logical switches are built on top of VXLAN Layer 2 networks and represent virtualized broadcast domains (similar to VLANs, but more scalable).
Huawei extends the function of these logical switches by integrating distributed Layer 3 gateway capabilities directly into them. This means that each logical switch can provide both Layer 2 connectivity and Layer 3 routing services (commonly referred to as distributed gateways). These gateways are typically implemented on leaf nodes using VXLAN with BGP EVPN as the control plane.
As a result, traffic between subnets can be routed locally at the ingress leaf switch without needing to traverse to a centralized gateway, significantly improving efficiency and reducing latency. Logical ports connected to these logical switches automatically receive gateway services, enabling seamless inter-subnet communication.
This design aligns with Huawei’s intent of decoupling physical and logical networks while providing scalable, agile, and highly efficient data center networking. Therefore, the statement is TRUE.