From a Layered, Partitioned Traditional Architecture to a Cloud Network Architecture: Joint Research and Application Practice of an SDN-Based Next-Generation Financial Cloud Network
Editor's Note: Financial cloud construction is a major and highly complex systems engineering effort that combines technology integration innovation with collaborative industry innovation. Financial institutions should ground their technology R&D in the core of financial technology, focus on solutions tailored to applying SDN and other technologies within financial institutions, and emphasize industry cooperation and innovation. This article is reprinted from Financial e.
Authors / China UnionPay: Zu Lijun, Yuan Hang, Zhou Yongkai
Bank of Shanghai: Ma Yongxiang, Wang Minghui, Jinns
Introduction
China UnionPay and Bank of Shanghai have established a cooperation on financial cloud and SDN technology research, under which China UnionPay's National Engineering Laboratory for E-Commerce and E-Payment and the Bank of Shanghai Data Center formed a joint research team to study SDN-based next-generation financial cloud network architecture, with the next-generation financial cloud data center as the blueprint. (Project funded by the Shanghai Science and Technology Commission (17YF1425800): Research and Application of Key Technologies for Financial Industry Cloud.)
The joint research team believes that the cloud network architecture is the most central and critical factor when institutions build financial cloud infrastructure platforms, which is reflected in several aspects.
However, because network technology change is difficult and its application impact is broad, the cloud network architecture implementation practice of financial institutions worldwide is still at an initial stage, with no standardized, unified architecture practice to reference. Against this backdrop, after nearly two years of effort, the joint research team has proposed a reference model for applying SDN to the cloud network under the current financial data center network architecture and has verified it in practice.
Applying the above cloud network architecture model yields a new generation of network architecture that meets the cloud requirements of the financial industry.
In the research process, we came to recognize that financial cloud computing technology research is a complex systems engineering effort of technology integration innovation and industry collaborative innovation, and that independent, controllable financial cloud computing technology needs sound industry ecosystem support. We have therefore maintained close communication with domestic and international industry, explored the relevant technologies within the financial industry in the form of working groups to deepen joint research, shared technical results within those groups, and cooperated with domestic and international cloud computing organizations through the collective strength of the working groups to advance financial cloud technology research, accelerate the transformation of financial technology, and support the innovative development of financial services.
Current state of network architecture in the financial industry
The history of network construction in the financial industry is inseparable from the process of informatization in the financial industry for nearly three decades, and has gone through several stages of development so far.
At this stage, the overall network of the financial industry is mostly based on a "two locations, three centers" structure. DWDM and high-bandwidth private lines are used to build a high-speed forwarding "core backbone network" for interconnection between data centers, with the data centers serving as backbone access nodes. A "three-tier network" architecture extends from the core backbone to aggregate branches at all levels: a primary network between the data center and primary branches, a secondary network between primary and secondary branches, and a tertiary network between secondary and tertiary branches (see Figure 1).
Figure 1 Overall diagram of the current-stage financial industry network
The data center adopts a "bus-type, modular" architecture, following the principle of "vertical layering and horizontal partitioning", and divides the network into multiple areas according to different types of application systems, differences in importance, and differences in security protection needs, and builds a switching bus through high-performance switches, with each part of the network interconnected through the bus (see Figure 2).
Figure 2 Schematic diagram of network partitioning within the data center
Network zoning falls into three main categories: business areas, isolation areas, and specific functional areas.
Business areas: host application servers and database servers of all kinds; application systems are assigned to different business areas according to specific principles.
Isolation areas: also known as DMZ areas, they host the various front-end systems that provide services to the Internet or to third-party organizations.
Specific functional areas: for example, a management area carrying the monitoring systems, process systems, and operation terminals used for data center maintenance, and a WAN area used for connectivity between the data center and the backbone network.
Advantages of the traditional architecture: secure, stable, reliable, and scalable.
Disadvantages of the architecture: lack of flexibility, siloed barriers, poor automation, and high construction costs.
Challenges facing financial data center networks
1. Challenges faced
At present, "the development of transformative innovation in business applications", "the application of new technologies represented by cloud computing" and "new requirements for financial IT cost and efficiency optimization" are the three major challenges facing the development of financial data center networks.
First, the development of business applications poses new challenges for network services. The rapid development of Internet-oriented financial innovation applications demands more agile network services: provisioning and changing network resources is expected to accelerate from a weekly cadence to the minute level, so that applications can reach production quickly.
Second, new technologies such as cloud computing are being used ever more widely within data centers and place new requirements on network connectivity. To support the live migration of virtualized resources, network services must identify virtual hosts rather than physical machines; the multi-tenant nature of the cloud requires independently decoupled network address and routing spaces on shared physical network equipment; elastic scaling in the cloud demands a higher level of network elasticity; and the sheer scale of cloud platforms requires cloud network services to provide sufficient capacity and robustness.
In addition, the operational pressure on financial institutions under the new normal requires IT architecture to become more efficient and lower-cost, evolving from purely commercial "product" solutions to "autonomous" network solutions. Financial data center network technology is becoming more open, and network operations and maintenance staff need to be freed from simple manual maintenance in favor of an efficient, automated approach.
2. Future network application requirements of financial data centers
Therefore, combining the challenges and trends described above, we believe the future network application requirements of financial data centers can be summarized as the "five highs": high agility, high elasticity, high manageability, high availability, and high performance:
High agility: bring business online rapidly, change resources on demand as applications change, break the dilemma of sacrificing efficiency for security through the application of new technologies, and give security and efficiency equal weight in the new cloud computing environment;
High elasticity: first, strengthen internal elasticity by breaking down the network-area barriers that limit resource sharing in the siloed architecture, consolidating network resource pools and enabling flexible sharing and isolation; second, provide external elasticity and compatibility, supporting the coexistence of old and new architectures so the existing network can transition smoothly to the new one;
High manageability: first, simplify the management system and support converged multi-vendor management; second, automate management and make it intelligent, freeing daily operations from the heavy workload of manual maintenance;
High availability: the sustained stability of the network architecture affects the global service capability of the financial data center, so the architecture must be built on stable, reliable technologies that give network services 7x24 business continuity;
High performance: in the face of extreme service scenarios such as flash sales, achieve step-change improvements in key metrics such as latency and bandwidth, while using resources efficiently and delivering maximum performance with as few resources as possible.
SDN-based Network Design and Vision for Next-Generation Financial Institutions
1. Network design principles
The application of SDN technology is a revolutionary change to the architecture of financial data centers, so the next-generation financial data center network needs to be designed in a targeted manner.
Service orientation: network functions are provided externally as services through standard API interfaces, and the network system is organized internally as services, enhancing external service capability and reducing the complexity of invoking network capabilities.
Unified management orchestration: a unified management view of Layer 2/3 connectivity and Layer 4/7 functions in the data center, with a two-tier orchestration approach for managing the different network resource pools, i.e., bottom-level adaptation of each resource pool's management operations and upper-level coordination and orchestration across heterogeneous pools.
Resource pool standardization: break the traditional siloed network architecture and build resource pools using new-generation large Layer 2 technology, improving the flexibility of compute and storage resource scheduling without enlarging broadcast domains or increasing the risk of Layer 2 loops.
Integration and re-engineering of mature technologies: given the requirement for smooth compatibility of network technologies, inherit basic, scalable network technologies and protocols and innovate by combining existing technologies, so that innovation does not come at the cost of stability and the global stability of the network architecture is preserved.
2. Financial Cloud Network Architecture Model Design and Concept
(1) Management control plane model
In traditional networks, functional components such as switches, firewalls, and load balancers typically come from different vendors. Their management interfaces differ (CLI and UI vary from product to product) and generally do not support APIs. Device parameter settings and configuration changes depend heavily on manual work and demand strong specialist skills from maintenance staff. Against this background, making the network service-oriented and automated is extremely difficult, and it is hard for a single tool or platform to manage all network devices and publish services uniformly.
As the industry has developed, network equipment manufacturers have gradually shifted from closed devices toward open interfaces. More and more products now support RESTful APIs, and management is no longer limited to traditional CLI and UI interfaces.
In the new data center architecture, to achieve service orientation, automation, and unified scheduling, it is not enough for each network component to support APIs and programmability. We believe the management and control plane must further abstract each brand's network APIs, establish a standard network service model through a cloud network control platform, and interface with each network component. On the one hand, this standardizes and unifies the network's external service interfaces, shields the interface differences between device brands, and simplifies development of the upper cloud management platform. On the other hand, it hides complex network parameters: the technical implementation and parameter settings of each network component remain the work of professional network engineers, while the upper-layer cloud management platform focuses on service flows, business orchestration, workflow scheduling, and invocation of standard network services (see Figure 3).
Figure 3 Network platform control architecture under cloud computing management platform
The cloud network control platform can be further divided into two layers, the service abstraction layer and the driver control layer.
The service abstraction layer is responsible for abstracting network resources into standard network services and models, such as create-network and create-router services, and for providing standard network service APIs to the upper-layer platforms.
The driver control layer is responsible for implementing the standard network services on specific products, translating the upper-layer API calls into interfaces that each network component can recognize and adjusting the components' parameters accordingly.
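To make the two-layer split concrete, the following minimal Python sketch (our own illustration; class and method names such as NetworkDriver and ServiceAbstractionLayer are assumptions, not the platform's actual code) shows a service abstraction layer exposing standard create-network and create-router services and delegating to vendor-specific drivers in the driver control layer:

```python
from abc import ABC, abstractmethod


class NetworkDriver(ABC):
    """Driver control layer: one implementation per vendor controller,
    translating standard service calls into that vendor's API."""

    @abstractmethod
    def create_network(self, name: str, cidr: str) -> str: ...

    @abstractmethod
    def create_router(self, name: str) -> str: ...


class AciDriver(NetworkDriver):
    """Hypothetical driver for an APIC-managed partition."""

    def create_network(self, name: str, cidr: str) -> str:
        # A real driver would call the APIC REST API here.
        return f"aci-net-{name}"

    def create_router(self, name: str) -> str:
        return f"aci-rtr-{name}"


class ServiceAbstractionLayer:
    """Service abstraction layer: a standard network service API that
    hides the interface differences between device brands."""

    def __init__(self, drivers: dict):
        self.drivers = drivers  # partition name -> NetworkDriver

    def create_network(self, partition: str, name: str, cidr: str) -> str:
        # Route the standard service call to the partition's driver.
        return self.drivers[partition].create_network(name, cidr)


# Usage: the upper cloud management platform sees only the standard API.
sal = ServiceAbstractionLayer({"cloud-zone-1": AciDriver()})
net_id = sal.create_network("cloud-zone-1", "web-tier", "10.10.0.0/24")
```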
(2) Switching network model
Traditional switching networks are stable but lack flexibility and efficiency. Compute, storage, network, and server room resources are held exclusively by each network partition: compute hosts cannot share resources across partitions, virtual machines are not allowed to migrate between hosts in different partitions, and compute resource utilization falls. The switching networks and functional components of small and medium-sized financial institutions generally use 10-Gigabit-class equipment with strong performance, yet for security, reliability, and compliance reasons the number of data center network partitions cannot be reduced, so with a small customer base and transaction volume, network resource utilization is generally low. In terms of server room space, equipment placement must weigh cabling, TOR switch deployment, power, cooling, and many other factors, making flexible deployment difficult. Attempting to solve these problems with traditional techniques such as large Layer 2 would enlarge broadcast domains, increase the risk of Layer 2 loops, and put tremendous pressure on subsequent operations and maintenance; the costs would outweigh the benefits.
Therefore, we believe the new switching network architecture must improve the utilization of compute, storage, and server room space resources without increasing O&M risk. We also introduce the concept of tenancy to segment and divide a switching network: a single financial institution can use it to consolidate some of the traditional network partitions, or multiple financial tenants can be deployed on it together.
The new switching network architecture of the financial data center still adopts a bus-type architecture, retaining the traditional partitions and traditional application access while adding multiple new cloud network partitions for launching new applications or migrating existing ones. With risk under control, some functionally identical network areas are deployed together: for example, multiple business areas are combined into one cloud network partition, and multiple isolation areas into another (see Figure 4).
Figure 4 Cloud network model architecture
A cloud network partition is built from SDN devices of a single brand and internally uses VXLAN to separate the underlay and overlay networks, decoupling the physical network architecture from the logical one. The physical structure is spine-and-leaf: compute leaf switches attach the compute servers that provide virtual machines, network-function leaf switches attach network element service devices such as load balancers and firewalls, and border leaf switches interconnect with the data center core switches. Within a partition, the spine devices carry traffic between the leaves, and each cloud network partition is managed and controlled by its own controller.
(3) Model of interconnection between partitions
The data center core switching network, by contrast, is built from independent switching equipment, and different cloud network partitions may use different SDN solutions, protocols, and technologies. The VXLAN tags used within a cloud network partition are stripped when packets leave the region, so interoperability between cloud network partitions is not possible at the overlay level.
The main challenge of this networking model and functional design is identifying tenant traffic across different cloud network partitions, so that once multi-tenant traffic with reused IP addresses has crossed the core switching network, it can be forwarded correctly to the right tenant's resources. After evaluation, we believe that VRF (Virtual Routing and Forwarding) or MPLS-VPN (virtual private networks based on Multi-Protocol Label Switching), both traditional technologies, can solve the problem of carrying tenant traffic between multiple cloud network partitions: a VRF or VPN network can be abstracted as a regional interconnection router that connects the different logical regions of the same tenant.
At the same time, we propose a regional interconnection (RI) SDN control technique that coordinates the core switching network with the SDN controllers of the cloud network partitions and provides the tunneling capability needed to carry multi-tenant identification across partitions.
Figure 5 RI data plane forwarding design diagram
Figure 5 presents the design of the data forwarding plane: as traffic leaves its cloud network partition, each tenant is fed into a different virtual routing table, a VRF (or VPN), for forwarding. VRF is a virtualization technique that isolates different routes for forwarding inside a switch, in effect turning one physical device into multiple virtual routing instances.
In the corresponding control plane, we designed an RI controller that automates the configuration of the core switching network. It has two major functions: first, dynamically creating and configuring VRFs and their related settings as tenants change; second, dynamically advertising routes as resources in the different SDN cloud network partitions change, so that connectivity to the changing network resources is maintained.
Figure 6 RI controller control plane design
Figure 6 shows the architecture of the RI controller. It manages the devices of the core switching network via the NETCONF protocol and, by adapting to the APIs of the different SDN controllers, reads the dynamic resource information of the corresponding cloud network partitions periodically or on trigger. When a new tenant is created across heterogeneous SDN partitions, the corresponding VRF channel is created automatically following the data forwarding plane design above. When a new network segment is created, the RI controller is triggered to query the new segment through the SDN control APIs and updates the VRF routing tables in the core switching network by OSPF dynamic route injection. We also designed a timed task in the RI controller that periodically queries the SDN controllers' network resource information, identifies deleted resources by comparison, and cleans up the routing tables.
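As a rough illustration of this control loop, the sketch below uses the ncclient NETCONF library to push VRF configuration to the core switches and reconciles routes against what the partition controllers report. It is a sketch under assumptions: the REST endpoint for listing networks, the VRF payload, and the function names are our own placeholders, not the prototype's actual code or real CE-series YANG.

```python
import requests
from ncclient import manager  # NETCONF client for the core switches

# Placeholder VRF payload; real configuration is vendor-specific YANG.
VRF_TEMPLATE = """<config>
  <vrf xmlns="urn:example:vrf"><name>{tenant}</name></vrf>
</config>"""


def fetch_partition_subnets(controller_url: str, token: str) -> set:
    """Read a cloud network partition's current subnets from its SDN
    controller (the endpoint path here is an assumption)."""
    resp = requests.get(f"{controller_url}/networks",
                        headers={"X-Auth-Token": token})
    resp.raise_for_status()
    return {net["cidr"] for net in resp.json()["networks"]}


def sync_tenant(core_host: str, creds: dict, tenant: str,
                desired: set, known: set) -> set:
    """Create the tenant VRF on first sight, then inject routes for new
    subnets and clean up routes for deleted ones (the prototype used
    OSPF injection for route advertisement)."""
    with manager.connect(host=core_host, port=830,
                         hostkey_verify=False, **creds) as conn:
        if not known:  # first sight of this tenant: create its VRF
            conn.edit_config(target="running",
                             config=VRF_TEMPLATE.format(tenant=tenant))
        for cidr in desired - known:
            print(f"inject route {cidr} into VRF {tenant}")   # placeholder
        for cidr in known - desired:
            print(f"withdraw route {cidr} from VRF {tenant}")  # placeholder
    return desired
```

In the prototype's terms, a timed task would call fetch_partition_subnets for each partition controller, then sync_tenant per tenant, matching the periodic comparison and cleanup described above.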
In the development process, we took into account the different operation and management protocols supported by core network devices and the new tunneling protocol usages that partners may propose in the future, so the code architecture supports extension with different device control methods and tunneling protocols for later improvement and optimization.
(4) Firewall and load balancing model
Firewalls and load balancers provide Layer 4-7 network services: security isolation between logical areas and traffic distribution across servers.
In the financial cloud network architecture model, hardware resources such as firewalls and load balancers can be pooled and scheduled on demand, with the firewall and load balancing resource pools managed uniformly through the cloud control platform. As VNF technology matures, load balancers and firewalls will in the future be attached to different business logic areas as VNFs, enabling well-orchestrated scheduling of traffic.
3. Two-site, three-center model concept
In a highly available scenario where the financial industry generally adopts "two locations and three centers", the network services of the financial industry cloud platform must also support network multi-tenancy capabilities across data centers.
Figure 7 Two-site, three-center model
The design can follow the design ideas of the existing backbone network, introducing MPLS VPN technology to carry each tenant's traffic in its own VPN, so that the resources a single tenant can invoke span all data centers and branch office access is supported. It also takes advantage of MP-BGP's rich routing capabilities and QoS technology so that tenant information can be delivered across data centers, with traffic isolated between tenants and between applications (see Figure 7).
Prototype practice
Based on the above design, the research team validated the single-center model with a prototype. The details are as follows:
1. Physical Architecture Overview
Figure 8 shows the physical architecture of the prototype platform. The core switching area consists of Huawei Layer 3 switches, with two cloud network partitions of different brands attached below: partition 1 uses Huawei equipment and partition 2 uses Cisco ACI equipment. Reflecting the cooperation arrangements with the IT vendors of the different partitions, the physical attachment points of the load balancers and firewalls differ slightly: in the Huawei partition they attach to the gateway group, while in the Cisco partition they attach to dedicated function leaves. The logical network architecture is identical in both.
Figure 8 Prototype platform physical architecture diagram
We built a prototype platform in the data center experimental environment jointly researched by China UnionPay and Bank of Shanghai. The platform consists of two heterogeneous SDN cloud network partitions from Huawei and Cisco and is equipped with corresponding load balancing and firewall resources; the platform devices are listed in Table 1.
| Category | Vendor | Equipment type | Model and version |
|---|---|---|---|
| SDN cloud network partition | Cisco ACI | Controller | APIC 2.1.3 |
| | | Spine | N9K-C9396PX |
| | | Leaf | N9K-C9336PQ |
| | Huawei AC | Controller | AC 2.0 |
| | | Spine | CE6851-48F6Q |
| | | Leaf | CE6850-48P4Q |
| Core routing device | Huawei | Layer 3 switch | CE6850-48P4Q |
| Firewall | Cisco | ASA | ASA 5512 |
| | Huawei | USG | USG6650 |
| Load balancing | F5 | BIG-IP | BIG-LTM-1600 |

Table 1 List of prototype platform devices
The platform was developed on open-source software including OpenStack, OVS, and CentOS; the software versions are listed in Table 2.
| Software | Version |
|---|---|
| OpenStack | L (Liberty) |
| OVS | 2.4.3 |
| CentOS | 7.0 |

Table 2 Software versions
2. Management Control Platform Overview
OpenStack serves as the cloud control platform, playing a linking role. Upward, it exposes standardized APIs for multi-tenant service scheduling; downward, through the underlying SDN controllers, the firewall driver, the load balancing driver, and the RI module, it abstracts, isolates, and schedules the physical network resources below.
3. Cloud network partitioning and cloud control platform integration
Under the cloud network partition architecture, four kinds of resources must be managed: Layer 2 networks (Cisco APIC and Huawei AC), Layer 3 gateways, firewalls, and load balancers. There are generally two modes of integrating cloud network partitions with the cloud management platform.
Figure 9 OpenStack integrated SDN cloud network partitioning technology model
In Mode A, the SDN controller drives Layer 2 and Layer 3, and devices above Layer 4 are managed directly through OpenStack Neutron; in Mode B, the SDN controller drives all devices. Our firewalls and load balancers are Cisco's ASA firewall, Huawei's USG firewall, and F5 load balancing devices; the table below shows the driver support status of these devices under Mode A and Mode B.
| Device | Mode A: OpenStack direct drive | Mode B: managed via Cisco SDN controller (APIC) | Mode B: managed via Huawei SDN controller (AC) |
|---|---|---|---|
| Cisco ASA firewall | × | × | × |
| Huawei USG firewall | × | × | √ |
| F5 load balancer | √ | × | × |

Table 3 Status of device and OpenStack integration drivers
We chose a mix of Mode A and Mode B for networking:
- The Cisco ASA firewall is managed through an OpenStack FWaaS driver that we re-developed (Mode A); a minimal driver skeleton is sketched after this list.
- The Huawei USG firewall is managed through the Huawei SDN controller (Mode B).
- The F5 load balancer is managed through the OpenStack driver officially provided by F5 (Mode A).
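A minimal skeleton of such a re-developed ASA driver, under the assumption that it followed the stock FWaaS v1 agent driver interface from the neutron_fwaas project (the ASA-side interaction is a placeholder, not Cisco's actual API):

```python
from neutron_fwaas.services.firewall.drivers import fwaas_base


class AsaFwaasDriver(fwaas_base.FwaasDriverBase):
    """Translates Neutron firewall resources into ASA security contexts,
    access lists, and access entries (see the mapping in Table 6 below)."""

    def create_firewall(self, agent_mode, apply_list, firewall):
        # Create a security context for the tenant firewall, then push
        # the policy's rules as an access list.
        self._push_to_asa("create", firewall)

    def update_firewall(self, agent_mode, apply_list, firewall):
        self._push_to_asa("update", firewall)

    def delete_firewall(self, agent_mode, apply_list, firewall):
        self._push_to_asa("delete", firewall)

    def apply_default_policy(self, agent_mode, apply_list, firewall):
        # Deny all traffic until a policy is associated.
        self._push_to_asa("default", firewall)

    def _push_to_asa(self, operation, firewall):
        """Placeholder for the vendor API interaction with the ASA."""
        raise NotImplementedError
```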
On top of this integration, we developed the inter-network RI module to deliver policies to the core switches; the final software driver architecture is shown in Figure 10.
Figure 10 Prototype integrated driver implementation
Below we will detail the specific integration of each module.
(1) Integration of Cisco APIC and Huawei AC
Integrating OpenStack with Cisco's APIC and Huawei's AC SDN controllers requires configuration changes to the control, network, and compute nodes on top of native OpenStack.
| | Cisco ACI | Huawei AC |
|---|---|---|
| OpenStack version | Liberty | Liberty |
| SDN controller version | APIC 2.0 | AC 2.0 |
| Network mode exposed to OpenStack | Only one option: OpFlex | Three options: VLAN, VXLAN, and GRE |
| Underlying network model | VLAN, VXLAN, and GRE | VLAN, VXLAN, and GRE |
| VLAN/VNI/tunnel ID assignment | Assigned by the SDN controller | Assigned by OpenStack |
| Control node configuration | Install and activate the vendor's APIC API; replace the native Layer 2 ML2 driver with the vendor's ML2 driver; add the vendor's L3 driver | Install and activate the vendor's AC API; replace the native Layer 2 ML2 driver with the vendor's ML2 driver; add the vendor's L3 driver |
| Network node configuration | Install the native DHCP service but replace the DHCP driver with the vendor's; stop the native OVS agent and replace it with the vendor's; install the vendor's Neutron OpFlex agent; stop the native L3 agent; stop the native metadata agent | Install the native DHCP service (do not stop); install the native OVS agent (do not stop); no other vendor-specific agent to install; stop the native L3 agent; keep the native metadata agent running |
| Compute node configuration | Install the vendor's OVS agent and stop the native one; install the vendor's Neutron OpFlex agent | Install the native OVS agent; no other vendor-specific agent to install |
| DHCP driver | Vendor's | OpenStack native |
| Local DHCP agent | Implemented | Not implemented |
| Local metadata agent | Implemented | Not implemented |
| Distributed SNAT | Implemented | Not implemented |
| Distributed gateway | Implemented | Not implemented |
| Policy delivery logic | The Neutron OpFlex agent on the compute and network nodes synchronizes the virtual network resources in Neutron and generates an endpoint file; agent-ovs reads it, queries the leaf node via the OpFlex protocol for the corresponding EPG information, and then pushes flow tables down to OVS via the OpenFlow protocol | Reuses the OpenStack policy delivery logic |

Table 4 Technical implementation of the Cisco APIC and Huawei AC integration
(2) F5 load balancing integration
In the OpenStack LBaaS v2 model, the load balancing service is abstracted into five kinds of virtual resources: loadbalancer, listener, pool, member, and health monitor. A loadbalancer can carry multiple listeners, each listening on a different port; a listener carries one pool, and a pool carries members and a health monitor. Table 5 shows how these standard virtual resources map to F5 resources.
| # | OpenStack abstract resource | Corresponding F5 resource | Notes |
|---|---|---|---|
| 1 | Loadbalancer | Partition, route domain, VLAN, SNAT pool, self IP | Partition and route domain are used for resource isolation |
| 2 | Listener | Virtual server, profile | |
| 3 | Pool | Pool | When creating the pool, F5 updates the virtual server's session persistence and mounts the pool on the virtual server |
| 4 | Member | Node | F5 updates the members in the pool when the node is generated |
| 5 | Health monitor | Health monitor | F5 updates the health monitor in the pool when the health monitor is generated |

Table 5 F5 load balancing OpenStack resource mapping
The LBaaS v2 driver sends the virtual load balancing resources in OpenStack to the F5 agent through RPC. One F5 agent can control a single F5 device, an HA-mode F5 cluster, or a Cluster-mode F5 cluster, and the distributed deployment of F5 agents improves the horizontal scalability of the F5 devices.
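To illustrate the resource hierarchy that the driver ultimately renders onto F5, the following sketch creates the five virtual resources through the standard Neutron LBaaS v2 REST endpoints; the endpoint URL, token, and IDs are placeholders, and error handling is minimal:

```python
import requests

NEUTRON = "http://controller:9696/v2.0"   # placeholder endpoint
HEADERS = {"X-Auth-Token": "<token>"}     # placeholder token


def post(path: str, body: dict) -> dict:
    resp = requests.post(f"{NEUTRON}{path}", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()


# loadbalancer -> listener -> pool -> member + health monitor
lb = post("/lbaas/loadbalancers", {"loadbalancer": {
    "name": "web-lb", "vip_subnet_id": "<subnet-id>"}})["loadbalancer"]

listener = post("/lbaas/listeners", {"listener": {
    "name": "web-http", "loadbalancer_id": lb["id"],
    "protocol": "HTTP", "protocol_port": 80}})["listener"]

pool = post("/lbaas/pools", {"pool": {
    "listener_id": listener["id"], "protocol": "HTTP",
    "lb_algorithm": "ROUND_ROBIN"}})["pool"]

member = post(f"/lbaas/pools/{pool['id']}/members", {"member": {
    "address": "10.0.0.11", "protocol_port": 80,
    "subnet_id": "<subnet-id>"}})["member"]

monitor = post("/lbaas/healthmonitors", {"healthmonitor": {
    "pool_id": pool["id"], "type": "HTTP", "delay": 5,
    "timeout": 3, "max_retries": 3}})["healthmonitor"]
```

Per Table 5, the F5 agent would render the loadbalancer as a partition plus route domain, the listener as a virtual server, and so on down the hierarchy.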
When integrating with Huawei AC, F5 management is scheduled through OpenStack, and the devices are attached to the AC network through AC's L2BSR mechanism.
When integrating with Cisco ACI, F5 management is scheduled through OpenStack, and the F5 devices are connected to ACI through ACI's Physical Domain method. Because ACI changes the virtual network type to the OpFlex type and the VLAN (or VNI) of such a virtual network is assigned by the APIC, OpenStack LBaaS fails when pushing configuration down to F5 for lack of the VLAN or VNI information. This must be resolved either by modifying the F5 driver code to support the OpFlex type or by enabling the Global Route mode that F5 provides.
(3) Firewall integration
As with load balancing, OpenStack provides two API models, v1.0 and v2.0, for the firewall service. FWaaS 2.0 was proposed in the community's M release and is still under development, so we integrated the firewall devices using the FWaaS 1.0 model.
Likewise, the firewall service is abstracted into several virtual resources: firewall, policy, and rule. A firewall can be applied to multiple routers; a firewall uses one policy, and a policy is a collection of rules. Table 6 shows the mapping to ASA firewall resources.
| # | OpenStack abstract resource | Corresponding Cisco ASA firewall resource | Notes |
|---|---|---|---|
| 1 | Firewall | Security context | Creating a firewall in OpenStack generates a security context in the ASA, along with internal and external Layer 3 sub-interfaces that are added to the security context for tenant isolation |
| 2 | Policy | Access list | |
| 3 | Rule | Access entry | |

Table 6 Cisco ASA firewall OpenStack resource mapping
The FWaaS plugin sends the virtual firewall resources in OpenStack to the ASA agent through RPC. One ASA agent can control a single ASA device, an HA-mode ASA cluster, or a Cluster-mode ASA cluster; the distributed deployment of ASA agents improves the horizontal scalability of the ASA devices, and the scheduling algorithm in the FWaaS plugin distributes the virtual firewall resources sensibly across different ASA devices.
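Analogously, a sketch of driving the FWaaS 1.0 model through its standard REST endpoints, reusing the placeholder post() helper from the load balancing sketch above (names and IDs are again placeholders):

```python
# rule -> policy -> firewall, matching the mapping in Table 6
rule = post("/fw/firewall_rules", {"firewall_rule": {
    "name": "allow-https", "protocol": "tcp",
    "destination_port": "443", "action": "allow"}})["firewall_rule"]

policy = post("/fw/firewall_policies", {"firewall_policy": {
    "name": "dmz-policy",
    "firewall_rules": [rule["id"]]}})["firewall_policy"]

fw = post("/fw/firewalls", {"firewall": {
    "name": "tenant-fw",
    "firewall_policy_id": policy["id"]}})["firewall"]
```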
(4) Network resources and OpenStack mapping model
The cloud management platform supports multiple financial institutions on the upper layer, each of which is a tenant. Each tenant corresponds to a project in OpenStack, encompassing multiple logical cloud network partitions and an isolated resource area consisting of RI. Each logical cloud network partition of a single tenant corresponds to a physical cloud network partition, and each physical cloud network partition supports logical cloud network partitions for multiple tenants (as shown in Figure 11).
Figure 11 Multi-tenant cloud network partitioning logical and physical resource mapping relationship diagram
In the integration with ACI, a logical cloud network partition corresponds to the tenant concept in ACI; an ACI tenant contains one VRF, and the virtual networks within that VRF are fully interconnected.
In the integration with AC, a logical cloud network partition corresponds to the VPC concept in AC; a VPC contains one VRF, and the virtual networks within that VRF are fully interconnected.
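The mapping can be captured in a small data model; the naming below is our own illustration following Figure 11, not code from the prototype:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogicalPartition:
    physical_partition: str   # e.g. the Huawei or Cisco cloud network zone
    vrf: str                  # ACI tenant VRF or AC VPC VRF


@dataclass
class Tenant:
    project_id: str           # the tenant's OpenStack project
    partitions: List[LogicalPartition] = field(default_factory=list)


# One tenant spanning both heterogeneous SDN partitions (names illustrative).
unionpay = Tenant("proj-unionpay", [
    LogicalPartition("huawei-zone", "vrf-unionpay"),
    LogicalPartition("cisco-aci-zone", "vrf-unionpay"),
])
```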
4. Effectiveness demonstration
Based on its multi-tenancy capability, the prototype platform creates two financial institution tenants, China UnionPay and Bank of Shanghai. The two tenants' network addresses are completely isolated yet reused; each tenant spans the two cloud network partitions from Huawei and Cisco, shares all hardware resources, and exchanges data through the core switching network. Figure 12 shows the extreme case in which the deployment, address planning, and turn-up of the two institutional tenants' VM resources are identical.
Figure 12 Complete isolated multiplexing of multi-tenant network addresses
Figure 13 shows the management service interface for a financial institution tenant administrator on the prototype platform. The prototype delivers automated provisioning and a unified view of the cloud data center's virtual routing, load balancing, and firewall network resources, along with the related network security configuration.
Figure 13 Cross-zone networking view displayed in the Tenant Management Services interface
5. Problems identified and deficiencies in practice
Given the current selection of UnionPay production technologies and an open research perspective, we carried out our research implementation on OpenStack open-source technology. During implementation we continuously collected and organized the problems encountered. Some that affected core applications we resolved with individual optimizations; others remain open, and we plan to submit optimizations to the community. Specifically:
(1) Layer 2/3 network model
The existing router model supports only one external exit to the Internet and cannot be configured with dual exits, but a financial institution's DMZ often requires two exits: one to the intranet and one to the extranet or Internet.
(2) Firewall model
Only single IPs or network segments are supported, not IP ranges or ranges of segments; multiple address objects cannot be combined into an address group, nor multiple service objects into a service group. These limitations cause a dramatic increase in the number of policies, reducing firewall efficiency and making maintenance harder.
The model cannot meet the needs of certain long-lived connection applications, causing application anomalies.
Traditional firewalls support ALG functions such as dynamically opened ports, which must be configured when non-standard ports are used (for example, FTP running on port 2121), but the firewall model has no such concept.
(3) Load balancing model
The existing load balancing model supports only single-arm (side-mounted) mode and does not support dual-arm, multi-arm, or other modes.
It cannot meet the needs of certain long-lived connection applications, causing application anomalies.
The listener supports few types, and there are few session persistence methods, load balancing algorithms, and health check methods.
(4) Other aspects
For example: IPS, DDoS network security, link load balancing, and similar capabilities.
At the same time, our research suggests that the API capabilities of SDN fabric controllers and other network devices, or the management software drivers they provide, could be further strengthened, so that when building SDN capability for financial networks, detailed parameters can be obtained more conveniently and management can be made more fine-grained.
Summary and outlook
Through the research and practice above, the joint research team of China UnionPay and Bank of Shanghai believes that the application of SDN technology will be the future direction of financial data center networks, and that the current accumulation of technology, namely the SDN-based design of the next-generation financial cloud network architecture, can drive step-by-step adoption in future production. We also believe that financial cloud construction is a major, highly complex systems engineering effort of technology integration innovation and industry collaborative innovation: financial institutions' technology R&D should be grounded in the core of financial technology, focus on solutions tailored to applying SDN and other technologies within financial institutions, and emphasize industry cooperation and innovation. At present, we plan and expect to establish a joint research working group for the financial industry in the field of cloud computing, uniting the research strength of financial institutions horizontally and all parties in the IT industry vertically, actively influencing the R&D direction of OpenStack and other open-source communities, jointly promoting breakthroughs in and application of key financial cloud technologies, and improving the universality and standardization of financial cloud technology. We warmly welcome everyone, especially partners from financial institutions, to join the joint research we have initiated and offer valuable guidance, so that financial cloud technology can become networked and common worldwide.