Zero-Touch Provisioning | • Automatically identifies and manages network devices to enable automated deployment of the underlay network. |
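The controller's internal zero-touch workflow is not public; the following is only a minimal conceptual sketch of the general idea, assuming devices are matched against a pre-planned fabric by serial number. All names, serials, and addresses are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlannedDevice:
    serial: str   # expected device serial number
    role: str     # e.g. "spine" or "leaf"
    mgmt_ip: str  # management IP to assign

# Hypothetical fabric plan keyed by serial number.
FABRIC_PLAN = {
    "SN-0001": PlannedDevice("SN-0001", "spine", "10.0.0.1"),
    "SN-0002": PlannedDevice("SN-0002", "leaf", "10.0.0.11"),
}

def on_device_discovered(serial: str) -> Optional[PlannedDevice]:
    """Match a newly discovered device against the plan and report its role."""
    plan = FABRIC_PLAN.get(serial)
    if plan is None:
        print(f"{serial} is not in the fabric plan; left unprovisioned.")
    else:
        print(f"{serial} recognized as {plan.role}; assigning {plan.mgmt_ip}.")
    return plan

on_device_discovered("SN-0002")
```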
Network Service Provisioning | • Supports interconnection with the mainstream OpenStack cloud platform or third-party applications from Layer 2 to Layer 7. The cloud platform or third-party applications invoke standard interfaces to provision network services. • Supports independent network service provisioning (including association with computing platforms) to implement automatic network deployment. |
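As a minimal sketch of cloud-driven provisioning, the snippet below creates a tenant network and subnet through the standard Neutron API using openstacksdk; in a deployment with the controller's Neutron plug-in, such calls would be translated into fabric configuration. The cloud name, network name, and CIDR are illustrative assumptions.

```python
import openstack

# Assumes an OpenStack cloud named "dc1" is defined in clouds.yaml.
conn = openstack.connect(cloud="dc1")

# Standard Neutron calls; the SDN controller realizes them on the fabric.
network = conn.network.create_network(name="app-tier")
subnet = conn.network.create_subnet(
    network_id=network.id,
    ip_version=4,
    cidr="192.168.10.0/24",
    name="app-tier-subnet",
)
print(f"Created network {network.id} with subnet {subnet.cidr}")
```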
Fabric Management | • Uses the standard VXLAN protocol to implement automatic network deployment, including VXLAN encapsulation. iMaster NCE-Fabric also supports VXLAN Layer 2 and Layer 3 interconnection as well as interconnection between VXLAN and traditional networks. • Supports various VXLAN networking scenarios and management and control of both software and hardware network devices. • Allows hybrid access of multiple terminal types, such as physical servers, VMs, and bare-metal servers, in different scenarios. |
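To illustrate what VXLAN encapsulation means on the wire (the packet format the controller automates), here is a small scapy sketch that wraps an inner Ethernet frame in an outer IP/UDP header plus a VXLAN header. The addresses and VNI are illustrative assumptions.

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

inner_frame = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
vxlan_packet = (
    Ether()
    / IP(src="10.1.1.1", dst="10.1.1.2")  # VTEP-to-VTEP outer header
    / UDP(dport=4789)                     # IANA-assigned VXLAN port
    / VXLAN(vni=5010)                     # 24-bit VXLAN network identifier
    / inner_frame                         # original tenant frame
)
vxlan_packet.show()
```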
Service Function Chain | • Supports the IETF-based SFC model and adopts PBR or NSH as traffic diversion technologies to steer service traffic to different nodes for processing, implementing a topology-independent SFC function with graphical orchestration and automatic configuration. • Provides value-added services (VAS), including security policy, NAT, and IPsec VPN. |
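Conceptually, a service function chain is an ordered list of service nodes that matching traffic is steered through. The sketch below models only that idea; the node names and match rule are assumptions, and the product's actual model follows the IETF SFC architecture and is configured graphically.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceFunctionChain:
    name: str
    match: dict                                 # classifier rule, e.g. {"dst_port": 443}
    nodes: list = field(default_factory=list)   # ordered VAS nodes traffic traverses

    def add_node(self, node: str) -> "ServiceFunctionChain":
        self.nodes.append(node)
        return self

chain = (
    ServiceFunctionChain(name="web-ingress", match={"dst_port": 443})
    .add_node("firewall-1")
    .add_node("ips-1")
    .add_node("load-balancer-1")
)
print(f"{chain.name}: traffic matching {chain.match} traverses {chain.nodes}")
```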
Network Security | • Supports microsegmentation and implements security isolation based on fine-grained groups, such as subnets, IP addresses, VM names, and host names. • Supports role-based access control to implement isolation between multiple tenants and to manage multiple users' accounts and rights. • Supports password-based local authentication and security authentication such as RADIUS and AD. |
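The following is a minimal sketch of the microsegmentation idea: endpoints are assigned to fine-grained groups by attributes such as VM name or subnet, and traffic between groups is permitted only when an explicit policy exists. The group names and rules are assumptions, not the product's policy model.

```python
import ipaddress

GROUPS = {
    "web": lambda vm: vm["name"].startswith("web-"),
    "db": lambda vm: ipaddress.ip_address(vm["ip"]) in ipaddress.ip_network("10.2.0.0/24"),
}
ALLOWED = {("web", "db")}  # the web tier may reach the database tier; nothing else

def group_of(vm):
    """Return the first group whose rule matches this endpoint, else None."""
    return next((name for name, rule in GROUPS.items() if rule(vm)), None)

def is_allowed(src_vm, dst_vm):
    """Permit traffic only if an explicit group-to-group policy exists."""
    return (group_of(src_vm), group_of(dst_vm)) in ALLOWED

print(is_allowed({"name": "web-01", "ip": "10.1.0.5"}, {"name": "db-01", "ip": "10.2.0.7"}))  # True
print(is_allowed({"name": "db-01", "ip": "10.2.0.7"}, {"name": "web-01", "ip": "10.1.0.5"}))  # False
```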
O&M and Fault Location | • Supports monitoring of physical, logical, and tenant resources. • Supports visualization of the application, logical, and physical network topologies. Mappings from the application topology to the logical topology, and from the logical topology to the physical topology, can also be displayed. • Displays forwarding paths of VTEPs and VMs in VXLAN scenarios, enabling precise fault location from the logical network to the physical network. • Supports intelligent loop detection and provides one-click repair. • Supports detection of Layer 2 or Layer 3 network connectivity between VMs, as well as between VMs and external networks, through IP Ping and MAC Ping, helping administrators quickly rectify faults. • Supports traffic mirroring (traffic from VMs or bare-metal servers can be mirrored to remote addresses through GRE tunnels). |
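A hypothetical sketch of triggering an IP Ping connectivity test through a northbound REST interface is shown below. The URL, path, payload fields, and token handling are assumptions for illustration only; the product's API reference defines the real interface.

```python
import requests

CONTROLLER = "https://nce-fabric.example.com:18002"  # assumed controller address
HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

payload = {
    "src_vm_ip": "192.168.10.5",
    "dst_vm_ip": "192.168.20.8",
    "test_type": "ip-ping",  # or "mac-ping" for a Layer 2 check
}
# Hypothetical endpoint path; replace with the documented connectivity-test API.
resp = requests.post(f"{CONTROLLER}/restapi/connectivity-test",
                     json=payload, headers=HEADERS, verify=False, timeout=30)
print(resp.json())
```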
Reliability | • Adopts distributed cluster deployment. A single cluster supports a maximum of 128 member nodes, and the service control node supports dynamic expansion without service interruption. • Supports deployment of cluster members in the same Layer 2 network or across a Layer 3 network, as long as routes between cluster members are reachable. • Load balances northbound cloud platform API requests and web access across controller nodes. • Supports southbound load balancing: devices across the entire network are evenly distributed among controller nodes for management. If a controller node fails, the network devices it manages are smoothly switched to other healthy nodes to avoid service interruption. • Supports active/standby mode to implement highly reliable remote disaster recovery. |
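The sketch below illustrates the southbound load-balancing idea only, not the product's actual algorithm: devices are spread evenly across controller nodes, and devices owned by a failed node are redistributed to the surviving nodes.

```python
def assign_devices(devices, nodes):
    """Round-robin devices across controller nodes."""
    assignment = {node: [] for node in nodes}
    for i, device in enumerate(devices):
        assignment[nodes[i % len(nodes)]].append(device)
    return assignment

def handle_node_failure(assignment, failed_node):
    """Move devices from a failed node onto the remaining healthy nodes."""
    orphaned = assignment.pop(failed_node)
    healthy = list(assignment)
    for i, device in enumerate(orphaned):
        assignment[healthy[i % len(healthy)]].append(device)
    return assignment

devices = [f"leaf-{n:02d}" for n in range(1, 13)]
assignment = assign_devices(devices, ["node-1", "node-2", "node-3"])
assignment = handle_node_failure(assignment, "node-2")
print({node: len(devs) for node, devs in assignment.items()})
```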
Openness | • Based on ONOS and compatible with the ODL architecture. • Supports northbound interfaces such as RESTful, RESTCONF, WebService, and Syslog from Layer 2 to Layer 7. Supports interconnection with mainstream OpenStack platforms (standard OpenStack, Red Hat, Mirantis, and UnitedStack) through the Neutron plug-in. • Supports interconnection with physical and virtual network devices using southbound protocols such as SNMP, NETCONF, OpenFlow (1.3/1.4), OVSDB, JSON-RPC, and sFlow. • Supports interconnection with computing resource management systems, such as VMware vCenter and Microsoft System Center, for collaboration between network and computing resources. |
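As a minimal example of the kind of southbound interaction listed above, the snippet uses the ncclient library to pull the running configuration from a switch over NETCONF. The host and credentials are illustrative assumptions.

```python
from ncclient import manager

with manager.connect(
    host="10.0.0.11",        # assumed leaf switch management IP
    port=830,
    username="admin",
    password="<password>",
    hostkey_verify=False,
) as m:
    running = m.get_config(source="running")
    print(running.data_xml[:500])  # print the start of the retrieved configuration
```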
Management Capacity and Performance | Single-node cluster: • Managed physical network devices: 600 • Managed physical servers: 3,000 • Managed VMs: 60,000 Typical three-node cluster: • Managed physical network devices: 1,800 • Managed physical servers: 9,000 • Managed VMs: 180,000 • VM online rate: 200 per second Typical five-node cluster: • Managed physical network devices: 3,000 • Managed physical servers: 15,000 • Managed VMs: 300,000 • VM online rate: 350 per second |