Symmetric backup
In a symmetric backup scheme the primary and the backup provide equal bandwidth, and the backup device or link also carries traffic in normal operation. The point to watch is that equal-cost routes send packets along different paths, so upper-layer protocols may have to wait while reassembling out-of-order segments, which reduces efficiency. The remedy is to prefer devices that forward equal-cost traffic per flow rather than per packet.
Asymmetric backup
In an asymmetric backup scheme the backup link provides equal or lower bandwidth, and it only takes over when the primary link fails.
If the backup link or device should also carry traffic in normal operation, policy-based routing or routing-protocol design can steer a specific subset of the business traffic onto it. (Reposted from: https://blog.51cto.com/flybear/277626)
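The per-flow versus per-packet distinction can be sketched in Python. This is an illustration only (the path names and the hash choice are made up, not any vendor's implementation): hashing the 5-tuple pins every packet of a flow to one path, while per-packet round-robin scatters a flow across paths and can reorder its segments.

```python
import hashlib

PATHS = ["link_A", "link_B"]  # two equal-cost paths (hypothetical names)

def pick_path_per_flow(src, dst, sport, dport, proto):
    """Hash the 5-tuple: every packet of a flow maps to the same path,
    so segments arrive in order and need no reassembly wait."""
    key = f"{src}:{dst}:{sport}:{dport}:{proto}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return PATHS[digest % len(PATHS)]

# All packets of one flow take a single path...
flow = ("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
paths = {pick_path_per_flow(*flow) for _ in range(100)}
print(paths)  # exactly one path in the set

# ...while per-packet round-robin alternates paths and can reorder packets.
round_robin = [PATHS[i % len(PATHS)] for i in range(100)]
print(len(set(round_robin)))  # 2
```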
Preface
This idea was not actually mine.
A senior student in my lab took part in a reinforcement-learning competition whose goal was to teach a simulated humanoid to stand, walk, and eventually run. During the contest he designed, in TensorFlow, a symmetric neural network that guarantees the final output is symmetric (concretely, the distribution of output values is mirrored left to right).
Discussion
My senior asked how I would implement such a network.
My thought: if the network structure is fairly simple, it suffices to keep every layer's parameter distribution left-right mirrored. Store only half the number of variables, letting mirrored positions share the same variable; during backpropagation, average the gradients of the two mirrored positions before applying the update.
He said his network was structured the same way, except that during backpropagation he summed the gradients of the mirrored positions before applying the update.
As far as I can tell, though, the only difference is that the averaging scheme's learning_rate is half of the summing scheme's; otherwise the two are identical.
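That equivalence is easy to check numerically. A minimal sketch with a single shared parameter (the gradient values and learning rate below are made up):

```python
# Toy check: for a shared parameter storing two mirrored positions,
# summing the two gradients with learning rate lr gives the same update
# as averaging them with learning rate 2*lr.
g_left, g_right = 0.3, -0.1   # gradients at the two mirrored positions
w0 = 1.0                      # shared parameter before the update
lr = 0.05

w_avg = w0 - 2 * lr * (g_left + g_right) / 2.0   # averaging scheme, rate 2*lr
w_sum = w0 - lr * (g_left + g_right)             # summing scheme, rate lr
print(w_avg, w_sum)  # identical updates
```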
Why HTTPS appeared
HTTP has a fatal flaw: its content travels in plaintext, without any encryption, and that plaintext passes through many physical nodes such as Wi-Fi access points, routers, carrier networks, and data centres. If any node along the way is tapped, everything transmitted is fully exposed. This attack technique is called MITM (Man In The Middle).
What HTTPS is
HTTPS simply takes the HTTP payload and encrypts it with SSL/TLS before transmission.
Encryption/decryption flow
- The user initiates an HTTPS request in the browser (e.g. https://www.mogu.com/), connecting to the server's port 443 by default;
- HTTPS requires a CA certificate, which carries a public key Pub; the matching private key Private is kept on the server and never published;
- On receiving the request, the server returns its configured certificate, containing the public key Pub, to the client;
- The client validates the certificate: whether it is within its validity period, whether its domain matches the requested domain, and whether the parent certificate is valid (recursing until a root certificate built into the OS or configured in the browser is reached). On failure the browser shows an HTTPS warning; on success it continues;
- The client generates a random Key for symmetric encryption, encrypts it with the public key Pub from the certificate, and sends it to the server;
- The server receives the encrypted random Key, decrypts it with the private key Private paired with Pub, and obtains the random Key the client actually sent;
- The server symmetrically encrypts the HTTP data to be transmitted with the client's random Key and returns the ciphertext to the client;
- The client symmetrically decrypts the ciphertext with the random Key, obtaining the HTTP data in plaintext;
Subsequent HTTPS requests use the previously exchanged random Key for symmetric encryption and decryption.
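The exchange above can be sketched end to end with textbook RSA and XOR standing in for the symmetric cipher. This is a toy with tiny demo primes, completely insecure, for illustration only:

```python
import secrets

# Textbook RSA toy keypair (p=61, q=53): public (n, e), private d.
n, e, d = 3233, 17, 2753

def rsa_encrypt(m, n, e):  # client encrypts with the certificate's public key
    return pow(m, e, n)

def rsa_decrypt(c, n, d):  # server decrypts with its private key
    return pow(c, d, n)

def xor_cipher(data: bytes, key: int) -> bytes:
    # Toy stand-in for a real symmetric cipher such as AES.
    return bytes(b ^ (key & 0xFF) for b in data)

# 1. Client picks a random symmetric key and sends it RSA-encrypted.
session_key = secrets.randbelow(n - 2) + 2
ciphertext_key = rsa_encrypt(session_key, n, e)

# 2. Server recovers the key; both sides now share it.
server_key = rsa_decrypt(ciphertext_key, n, d)
assert server_key == session_key

# 3. Subsequent traffic is symmetrically encrypted with the shared key.
msg = b"GET / HTTP/1.1"
wire = xor_cipher(msg, session_key)
print(xor_cipher(wire, server_key))  # b'GET / HTTP/1.1'
```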
In plain terms:
- The server's public key is like an open box with a lock that only the server can open; it is used for encryption;
- The server hands the open box to the client; the client puts the content and the agreed encryption method inside, locks the box, and sends it to the server;
- Even if the box passes through other hands along the way, only the server holds its key, so no one else can open it;
- On receiving the box, the server opens it and obtains the content and the agreed encryption method;
- From then on both sides simply communicate with the encryption method agreed inside the box; since only the client and the server know it, eavesdroppers can neither understand nor read the traffic.
Projection and symmetry of bipartite networks
```python
# inspect the data
import pandas as pd

data_trade_total = pd.read_csv('./data/comtrade_trade_data_total_2003.csv')
print(len(data_trade_total))  # 44259
data_trade_total.sample(10)
```
```
# Special 'exclude_countries' aggregates to be excluded when loading data
472  Africa CAMEU region, nes
899  Areas, nes
471  CACM, nes
129  Caribbean, nes
221  Eastern Europe, nes
97   EU-27
697  Europe EFTA, nes
492  Europe EU, nes
838  Free Zones
473  LAIA, nes
536  Neutral Zone
637  North America and Central America, nes
290  Northern Africa, nes
527  Oceania, nes
577  Other Africa, nes
490  Other Asia, nes
568  Other Europe, nes
636  Rest of America, nes
839  Special Categories
879  Western Asia, nes
0    World
```
Symmetrising the network
```python
import networkx as nx

def net_symmetrisation(wtn_file, exclude_countries):
    """Build a directed trade network, averaging the import- and
    export-reported values when both flows are present."""
    DG = nx.DiGraph()
    # column positions in the Comtrade CSV
    Reporter_pos = 1
    Flow_code_pos = 2
    Partner_pos = 3
    Value_pos = 9
    dic_trade_flows = {}
    hfile = open(wtn_file, 'r')
    header = hfile.readline()
    lines = hfile.readlines()
    for l in lines:
        l_split = l.split(',')
        # skip lines without data
        if len(l_split) < 2:
            continue
        reporter = int(l_split[Reporter_pos])
        partner = int(l_split[Partner_pos])
        flow_code = int(l_split[Flow_code_pos])
        value = float(l_split[Value_pos])
        if (reporter in exclude_countries) or (partner in exclude_countries) \
                or (reporter == partner):
            continue
        if flow_code == 1 and value > 0.0:  # 1=Import, 2=Export
            if (partner, reporter, 2) in dic_trade_flows:
                DG[partner][reporter]['weight'] = \
                    (DG[partner][reporter]['weight'] + value) / 2.0
            else:
                DG.add_edge(partner, reporter, weight=value)
                dic_trade_flows[(partner, reporter, 1)] = value
        elif flow_code == 2 and value > 0.0:  # 1=Import, 2=Export
            if (reporter, partner, 1) in dic_trade_flows:
                DG[reporter][partner]['weight'] = \
                    (DG[reporter][partner]['weight'] + value) / 2.0
            else:
                DG.add_edge(reporter, partner, weight=value)
                dic_trade_flows[(reporter, partner, 2)] = value
        else:
            print('trade flow not present\n')
    hfile.close()
    return DG
```
Generating the aggregate network
```python
# importing the main modules
import networkx as nx

# countries to be excluded
exclude_countries = [472, 899, 471, 129, 221, 97, 697, 492, 838, 473, 536,
                     637, 290, 527, 577, 490, 568, 636, 839, 879, 0]

# this is the magic command to have the graphics embedded in the notebook
%pylab inline

DG = net_symmetrisation('./data/comtrade_trade_data_total_2003.csv',
                        exclude_countries)
print('number of nodes: ', DG.number_of_nodes())  # number of nodes: 232
print('number of edges: ', DG.number_of_edges())  # number of edges: 27901
```
Reciprocity
In network science, reciprocity measures how likely pairs of vertices in a directed network are to be mutually linked. For an unweighted network it is defined (Wikipedia) as

$$r = \frac{L^{\leftrightarrow}}{L},$$

where $L^{\leftrightarrow}$ is the number of reciprocated links, which for a connected network amounts to $L^{\leftrightarrow} = \sum_{i \neq j} a_{ij} a_{ji}$.

```python
# unweighted case
L = DG.number_of_edges()
L_rep = 0
for u, v in DG.edges():
    if DG.has_edge(v, u):
        L_rep += 1
print(L, L_rep, L_rep / L)
```
In the weighted case the formula changes to

$$r_w = \frac{W^{\leftrightarrow}}{W},$$

where $W^{\leftrightarrow} = \sum_{i \neq j} w^{\leftrightarrow}_{ij}$ is the sum of the reciprocated weights, with $w^{\leftrightarrow}_{ij} = \min(w_{ij}, w_{ji})$ and $W = \sum_{i \neq j} w_{ij}$.

```python
# weighted case
W = 0
W_rep = 0
for n in DG.nodes():
    for e in DG.out_edges(n, data=True):
        W += e[2]['weight']
        if DG.has_edge(e[1], e[0]):
            W_rep += min(DG[e[0]][e[1]]['weight'],
                         DG[e[1]][e[0]]['weight'])
print(W, W_rep, W_rep / W)
# 7170872443378.5 5195628162988.0 0.7245461698019161
```
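The weighted formula can be verified by hand on a toy graph (the three nodes and their edge weights below are made up):

```python
# Directed weighted edges of a toy 3-node graph: (u, v) -> weight.
w = {(1, 2): 4.0, (2, 1): 1.0,   # reciprocated pair, min weight = 1.0
     (2, 3): 2.0,                # unreciprocated
     (3, 1): 3.0}                # unreciprocated

W = sum(w.values())                        # total weight = 10.0
W_rep = sum(min(w[(u, v)], w[(v, u)])      # reciprocated weight, counted
            for (u, v) in w if (v, u) in w)  # once per ordered pair
print(W, W_rep, W_rep / W)  # 10.0 2.0 0.2
```

As in the notebook's loop over out-edges, each reciprocated pair contributes its minimum weight once per direction, so here $W^{\leftrightarrow} = 1.0 + 1.0 = 2.0$ and $r_w = 0.2$.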
Assortativity
If high-degree nodes tend to link to other high-degree nodes, the network is degree assortative (positively correlated); if high-degree nodes tend to link to low-degree nodes, it is disassortative (negatively correlated); if neither tendency holds, the network shows no degree correlation.

```python
# average nearest-neighbour degree (K_nn) distribution
list_Knn = []
for n in DG.nodes():
    degree = 0.0
    count = 0
    for nn in DG.neighbors(n):
        count += 1
        degree = degree + DG.degree(nn)
    if count > 0:  # skip nodes without successors
        list_Knn.append(degree / count)

# plot the histogram
hist(list_Knn, bins=12)
```
The Pearson correlation coefficient is one way of measuring the similarity of two vectors. Its output ranges from -1 to +1: 0 means no correlation, negative values indicate negative correlation, and positive values indicate positive correlation.
```python
# Pearson correlation coefficient of the degrees at either end of an edge
r1 = nx.degree_assortativity_coefficient(DG)
print(r1)  # -0.33500264363818966
```
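The sign of such a coefficient can be checked by hand. Below is a pure-Python sketch of the same idea (Pearson correlation over edge-endpoint degrees) on a made-up star graph, where the hub only touches degree-1 leaves, so the correlation is maximally negative:

```python
from statistics import mean

# Toy undirected star: centre 0 linked to leaves 1, 2, 3 (made-up graph).
edges = [(0, 1), (0, 2), (0, 3)]
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

# Degrees seen at the two ends of every edge (both orientations).
x = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
y = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]

mx, my = mean(x), mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
norm = (sum((a - mx) ** 2 for a in x) *
        sum((b - my) ** 2 for b in y)) ** 0.5
r = cov / norm
print(r)  # -1.0: the hub connects only to degree-1 leaves (disassortative)
```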
```python
data_product = pd.read_csv('./data/comtrade_trade_data_2003_product_09.csv')
data_product.sample(10)

dic_product_netowrk = {}
commodity_codes = ['09', '10', '27', '29', '30', '39', '52',
                   '71', '72', '84', '85', '87', '90', '93']
for c in commodity_codes:
    dic_product_netowrk[c] = net_symmetrisation(
        './data/comtrade_trade_data_2003_product_' + c + '.csv',
        exclude_countries)

DG_aggregate = net_symmetrisation('./data/comtrade_trade_data_total_2003.csv',
                                  exclude_countries)
```
```python
# normalise the edge weights of each network so that they sum to 1
w_tot = 0
for u, v, d in DG_aggregate.edges(data=True):  # data=True yields edge weights
    w_tot += d['weight']
for u, v, d in DG_aggregate.edges(data=True):
    d['weight'] = d['weight'] / w_tot          # weights now sum to ~1

for c in commodity_codes:
    w_tot = 0.0
    for u, v, d in dic_product_netowrk[c].edges(data=True):
        w_tot += d['weight']
    for u, v, d in dic_product_netowrk[c].edges(data=True):
        d['weight'] = d['weight'] / w_tot

# check the number of nodes
print(len(DG_aggregate.nodes()))   # 232
DG_aggregate.number_of_nodes()     # 232
```
```python
# directed density L / (N(N-1)) of the aggregate network
density_aggregate = DG_aggregate.number_of_edges() / \
    (DG_aggregate.number_of_nodes() *
     (DG_aggregate.number_of_nodes() - 1.0))

w_agg = []
NS_in = []
NS_out = []
for u, v, d in DG_aggregate.edges(data=True):
    w_agg.append(d['weight'])
for n in DG_aggregate.nodes():
    if DG_aggregate.in_degree(n) > 0:
        NS_in.append(DG_aggregate.in_degree(n, weight='weight') /
                     DG_aggregate.in_degree(n))
    if DG_aggregate.out_degree(n) > 0:
        NS_out.append(DG_aggregate.out_degree(n, weight='weight') /
                      DG_aggregate.out_degree(n))

# compare each commodity network with the aggregate one
# (mean comes from the %pylab namespace)
for c in commodity_codes:
    density_commodity = dic_product_netowrk[c].number_of_edges() / \
        (dic_product_netowrk[c].number_of_nodes() *
         (dic_product_netowrk[c].number_of_nodes() - 1.0))
    w_c = []
    NS_c_in = []
    NS_c_out = []
    for u, v, d in dic_product_netowrk[c].edges(data=True):
        w_c.append(d['weight'])
    for n in dic_product_netowrk[c].nodes():
        if dic_product_netowrk[c].in_degree(n) > 0:
            NS_c_in.append(dic_product_netowrk[c].in_degree(n, weight='weight') /
                           dic_product_netowrk[c].in_degree(n))
        if dic_product_netowrk[c].out_degree(n) > 0:
            NS_c_out.append(dic_product_netowrk[c].out_degree(n, weight='weight') /
                            dic_product_netowrk[c].out_degree(n))
    print(c, str(round(density_commodity / density_aggregate, 4)) + ' & ' +
          str(round(mean(w_c) / mean(w_agg), 4)) + ' & ' +
          str(round(mean(NS_c_in) / mean(NS_in), 4)) + ' & ' +
          str(round(mean(NS_c_out) / mean(NS_out), 4)))
```
```
# output: code, then density / <w> / <NS_in> / <NS_out>, each relative to the aggregate
09 0.3089 & 3.3811 & 2.553 & 2.3906
10 0.1961 & 5.5195 & 5.9919 & 2.5718
27 0.3057 & 3.3575 & 2.6786 & 3.2979
29 0.3103 & 3.3664 & 2.3579 & 1.6286
30 0.3662 & 2.803 & 2.3308 & 1.267
39 0.4926 & 2.0478 & 1.753 & 1.1385
52 0.2864 & 3.5839 & 2.7572 & 2.1254
71 0.2843 & 3.6746 & 1.9479 & 2.6704
72 0.3081 & 3.3315 & 2.5847 & 1.8484
84 0.6195 & 1.6281 & 1.3359 & 1.0259
85 0.5963 & 1.6917 & 1.3518 & 1.0692
87 0.4465 & 2.259 & 1.7488 & 1.1105
90 0.4734 & 2.1492 & 1.5879 & 1.0993
93 0.1414 & 8.4677 & 6.0618 & 4.0279
```
It is necessary to weigh the export of a good in relation to how much of the same product is exported worldwide, i.e. the share $x_p/x_{tot}$ of product $p$ in the global value of exports. This must be compared with the importance of the export for the single country $c$, which is again a ratio: the export $x_{cp}$ of product $p$ with respect to the total export $x_c$ of that country. This gives the Revealed Comparative Advantage

$$RCA_{cp} = \frac{x_{cp}/x_c}{x_p/x_{tot}}.$$

We consider country $c$ to be a competitive exporter of product $p$ if its $RCA_{cp}$ is greater than some threshold value.
```python
def RCA(c, p):
    X_cp = dic_product_netowrk[p].out_degree(c, weight='weight')
    X_c = DG_aggregate.out_degree(c, weight='weight')
    X_p = 0.0
    for n in dic_product_netowrk[p].nodes():
        X_p += dic_product_netowrk[p].out_degree(n, weight='weight')
    X_tot = 0.0
    for n in DG_aggregate.nodes():
        X_tot += DG_aggregate.out_degree(n, weight='weight')
    RCA_cp = (X_cp / X_c) / (X_p / X_tot)
    return RCA_cp

p = '93'
c = 381
print(RCA(c, p))  # 2.104705551640614
```
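As a sanity check of the formula, the index can be recomputed on a made-up two-country, two-product export table (all names and numbers below are hypothetical):

```python
# Made-up exports X[country][product].
X = {"A": {"wine": 8.0, "steel": 2.0},
     "B": {"wine": 2.0, "steel": 8.0}}

def rca(c, p):
    x_cp = X[c][p]
    x_c = sum(X[c].values())                          # country c's total exports
    x_p = sum(X[k][p] for k in X)                     # world exports of product p
    x_tot = sum(sum(v.values()) for v in X.values())  # world exports overall
    return (x_cp / x_c) / (x_p / x_tot)

# Country A devotes 80% of its exports to wine, against a 50% world share,
# so it is a competitive wine exporter (RCA > 1) but not a steel one.
print(rca("A", "wine"), rca("A", "steel"))  # 1.6 0.4
```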
The square matrices $C = M\,M^{T}$ and $P = M^{T} M$, where $M^{T}$ is the transposed matrix, define the country–country network and the product–product network. The element $C_{cc'}$ defines the weight associated to the link between countries $c$ and $c'$ in the country–country network. Analogously, $P_{pp'}$ gives the weight of the link between products $p$ and $p'$ in the product–product network.
These weights have an interesting interpretation: if we write explicitly the expression of a generic element of the matrix according to (2.13), we have $C_{cc'} = \sum_{p} M_{cp} M_{c'p}$. Therefore the element $C_{cc'}$ (since $M$ is a binary unweighted matrix) counts the number of products exported by both countries $c$ and $c'$. In a similar way, the element $P_{pp'} = \sum_{c} M_{cp} M_{cp'}$ counts the number of countries which export both products $p$ and $p'$.
The diagonal elements $C_{cc}$ and $P_{pp}$ are, respectively, the number of products exported by country $c$ and the number of exporters of product $p$.
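This interpretation can be verified on a tiny binary matrix in pure Python (the example matrix is made up):

```python
# Binary country x product matrix M: 2 countries (rows), 3 products (columns).
M = [[1, 1, 0],    # country 0 exports products 0 and 1
     [0, 1, 1]]    # country 1 exports products 1 and 2

def matmul(A, B):
    """Plain-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [list(col) for col in zip(*M)]   # M transposed

C = matmul(M, T)   # country-country: products exported by both countries
P = matmul(T, M)   # product-product: countries exporting both products

print(C)  # [[2, 1], [1, 2]]: diagonal = products per country, off-diagonal = shared
print(P)  # [[1, 1, 0], [1, 2, 1], [0, 1, 1]]
```

Both countries export product 1, so $C_{01} = 1$; product 1 is exported by both countries, so $P_{11} = 2$ on the diagonal, exactly as the interpretation above predicts.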
```python
import numpy as np

num_countries = DG_aggregate.number_of_nodes()
num_products = len(commodity_codes)

# generate array indices
country_index = {}
i = 0
for c in DG_aggregate.nodes():
    country_index[c] = i
    i += 1

M = np.zeros((num_countries, num_products))
for pos_p, p in enumerate(commodity_codes):
    for c in dic_product_netowrk[p].nodes():
        if RCA(c, p) > 1.0:  # threshold assumed to be 1
            M[country_index[c]][pos_p] = 1.0
print('\n')

C = np.dot(M, M.transpose())
P = np.dot(M.transpose(), M)
print(C)
print(P)
```
```
[[1. 0. 0. ... 1. 0. 0.]
 [0. 3. 1. ... 0. 0. 0.]
 [0. 1. 3. ... 0. 1. 0.]
 ...
 [1. 0. 0. ... 1. 0. 0.]
 [0. 0. 1. ... 0. 2. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
[[83. 27. 28.  4.  6.  6. 29. 31. 20.  1.  3.  3.  5. 12.]
 [27. 59. 19.  4.  4.  8. 27. 18. 19.  5.  3.  7.  3. 12.]
 [28. 19. 71.  4.  2.  7. 20. 16. 14.  3.  4.  4.  1.  9.]
 [ 4.  4.  4. 20.  9.  9.  2.  6.  5.  5.  4.  3.  7.  7.]
 [ 6.  4.  2.  9. 27. 15.  7.  6. 10.  9.  3.  8.  9. 10.]
 [ 6.  8.  7.  9. 15. 37. 10.  7. 15. 10. 10.  8.  9. 11.]
 [29. 27. 20.  2.  7. 10. 69. 19. 18.  4.  5.  7.  5. 14.]
 [31. 18. 16.  6.  6.  7. 19. 57. 10.  4.  3.  4.  6.  9.]
 [20. 19. 14.  5. 10. 15. 18. 10. 56.  7.  7. 12.  2. 15.]
 [ 1.  5.  3.  5.  9. 10.  4.  4.  7. 26. 12.  9.  7.  6.]
 [ 3.  3.  4.  4.  3. 10.  5.  3.  7. 12. 26.  5.  8.  6.]
 [ 3.  7.  4.  3.  8.  8.  7.  4. 12.  9.  5. 27.  5. 11.]
 [ 5.  3.  1.  7.  9.  9.  5.  6.  2.  7.  8.  5. 20.  5.]
 [12. 12.  9.  7. 10. 11. 14.  9. 15.  6.  6. 11.  5. 38.]]
```