  • mininet

    2021-03-11 16:31:12

    Mininet
    mn -c clears leftover configuration (when something goes wrong, resources may not have been released)
    sudo mn --custom file.py --topo mytopo
    file.py is expected under the /mininet/custom directory; otherwise, give its absolute path
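
    As a minimal sketch of what such a file might contain (the class name MyTopo and the two-host layout are only illustrative assumptions), a --custom file defines a Topo subclass and registers it under the name passed to --topo:

    # file.py -- illustrative topology for: sudo mn --custom file.py --topo mytopo
    from mininet.topo import Topo

    class MyTopo(Topo):
        "Two hosts connected through one switch."
        def build(self):
            h1 = self.addHost('h1')
            h2 = self.addHost('h2')
            s1 = self.addSwitch('s1')
            self.addLink(h1, s1)
            self.addLink(h2, s1)

    # mn looks up the name given to --topo in this dictionary
    topos = {'mytopo': (lambda: MyTopo())}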

    Mininet commands

    Network construction options

    --topo

    --custom

    --switch
    selects the switch type to use; the default is OVSK (Open vSwitch)

    --controller
    sudo mn --controller=remote,ip=[controller ip],port=[port]   (usually 6653 or 6633)

    --mac automatically sets device MAC addresses,
    so that MAC and IP addresses are ordered from small to large and are easier to read
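
    For example (assuming the default minimal topology of two hosts and one switch), after sudo mn --mac the hosts come up with matching, easy-to-read addresses:

    h1: IP 10.0.0.1, MAC 00:00:00:00:00:01
    h2: IP 10.0.0.2, MAC 00:00:00:00:00:02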

    Interactive CLI commands

    net
    nodes
    links
    pingall

    py: extending the topology at runtime

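    A rough sketch of what this looks like (the host name h3 and the interface name s1-eth3 are illustrative assumptions; the exact interface name depends on how many ports s1 already has):

    py net.addHost('h3')
    py net.addLink(s1, net.get('h3'))
    py s1.attach('s1-eth3')
    py net.get('h3').cmd('ifconfig h3-eth0 10.0.0.3 netmask 255.0.0.0 up')
    h1 ping -c 1 10.0.0.3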

    Viewing py help information

    py help(s1)
    py help(h1)
    py dir(s1)    also lists which methods are available
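
    Beyond help(), py can evaluate arbitrary expressions on the node and network objects; a few illustrative calls (standard Mininet methods, shown here only as examples):

    py h1.IP()            # print h1's IP address
    py s1.intfNames()     # list s1's interface names
    py net.hosts          # list all host objects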
    


    py: modifying a host's IP
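
    For example (a minimal sketch; the address 10.0.0.100/24 is arbitrary):

    py h1.setIP('10.0.0.100/24')
    py h1.IP()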


    Mininet visualization

    /mininet/examples/miniedit.py

    Exception: Error creating interface pair (s11-eth2,s12-eth2): RTNETLINK answers: File exists
    Fix: run mn -c to clear the leftover configuration

  • Mininet

    2017-12-11 21:25:00

    In the Coursera SDN open course, the programming assignments are done with Mininet. This post gives a brief introduction to Mininet.

    What is Mininet

    Mininet is a network emulator built from virtual end-hosts, switches, and routers connected together; it uses lightweight virtualization so that the emulated system compares well with a real network.

    Mininet makes it easy to create an SDN-capable network: hosts work just like real computers, so you can log in with ssh and start applications, which send packets through what look like real Ethernet ports; the packets are then received and processed by the switches and routers. With such a network you can flexibly add new functionality, test it, and then deploy it to real hardware with little effort.

    Mininet features

    Create user-defined network topologies simply and quickly, shortening the develop-and-test cycle

    Run real programs: essentially anything that runs on Linux can run in Mininet, e.g., Wireshark

    Mininet supports OpenFlow; code that runs in Mininet can easily be ported to OpenFlow-capable hardware

    Mininet can run on your own computer, on a server, in a virtual machine, or in the cloud (e.g., Amazon EC2)

    Mininet provides a simple, easy-to-use Python API

    Mininet is an open-source project; the source code is at https://github.com/mininet

    ……

    Installing Mininet

    Install the Mininet virtual machine with VirtualBox: http://mininet.org/download/

    Creating a network with Mininet

    Using the Coursera SDN Week 3 programming assignment as an example, we create an extremely simple data center network.

      Data center networks typically have a tree-like topology. End-hosts connect to top-of-rack switches, which form the leaves (edges) of the tree; one or more core switches form the root; and one or more layers of aggregation switches form the middle of the tree. In a basic tree topology, each switch (except the core switch) has a single parent switch. Additional switches and links may be added to construct more complex tree topologies (e.g., fat tree) in an effort to improve fault tolerance or increase inter-rack bandwidth.

    In this assignment, your task is to create a simple tree topology. You will assume each level, i.e., core, aggregation, edge, and host, to be composed of a single layer of switches/hosts with a configurable fanout value (k).

    Code:

    # CustomTopo.py
    '''
    Coursera:
    - Software Defined Networking (SDN) course
    -- Module 3 Programming Assignment
    
    Professor: Nick Feamster
    Teaching Assistant: Muhammad Shahbaz
    '''
    
    from mininet.topo import Topo
    from mininet.net import Mininet
    from mininet.node import CPULimitedHost
    from mininet.link import TCLink
    from mininet.util import irange,dumpNodeConnections
    from mininet.log import setLogLevel
    
    class CustomTopo(Topo):
        "Simple Data Center Topology"
    
        "linkopts - (1:c1, 2:aggregation, 3: edge) parameters"
        "fanout - number of child switch per parent switch"
        def __init__(self, linkopts1, linkopts2, linkopts3, fanout=2, **opts):
            # Initialize topology and default options
            Topo.__init__(self, **opts)
                            
            # Add your logic here ...
            self.fanout = fanout
            core = self.addSwitch('c1')
            for i in irange(1, fanout):
                aggregation = self.addSwitch('a%s' %i)
                self.addLink(core, aggregation, **linkopts1)
                for j in irange(1, fanout):
                    edge = self.addSwitch('e%s' %(fanout*(i-1)+j))
                    self.addLink(aggregation, edge, **linkopts2)
                    for k in irange(1, fanout):
                        host = self.addHost('h%s' %((fanout*(fanout*(i-1)+j-1))+k))
                        self.addLink(edge, host, **linkopts3)
                       
    topos = { 'custom': ( lambda: CustomTopo({}, {}, {}) ) }  # pass empty link options so mn can instantiate it
    
    def simpleTest():
        "Create and test a simple network"
        linkopts1 = dict(bw=10, delay='3ms', use_htb=True)
        linkopts2 = dict(bw=8, delay='4ms', loss=1, max_queue_size=900, )
        linkopts3 = dict(bw=6, delay='5ms', loss=1, max_queue_size=800)
        topo = CustomTopo(linkopts1, linkopts2, linkopts3, fanout=2)
        net = Mininet(topo, host=CPULimitedHost, link=TCLink)
        net.start()
        print "Dumping host connections"
        dumpNodeConnections(net.hosts)
        print "Testing network connectivity"
        net.pingAll()
        net.stop()
    
    if __name__ == '__main__':
       # Tell mininet to print useful information
       setLogLevel('info')
       simpleTest()
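
    Because the file exports a topos dictionary, it can also be loaded by the mn launcher instead of being run directly, for example (assuming CustomTopo.py is in the current directory, and using the empty link options shown above):

    sudo mn --custom CustomTopo.py --topo custom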

    Run the following on the Mininet VM to create the custom topology. The simpleTest() function builds the network and runs a simple ping test; the build process can be seen in the console output.

        mininet@mininet-vm:~/mininet$ sudo python CustomTopo.py

     

    More resources

    1. Mininet: http://mininet.org/

    2. Mininet wiki: https://github.com/mininet/mininet/wiki

     

    Reposted from: https://www.cnblogs.com/wangprince2017/p/8025095.html

  • Mininet: Rapid Prototyping for Software Defined Networks The best way to emulate almost any network on your laptop! Mininet 2.3.0 What is Mininet? Mininet emulates a complete network of hosts, links...
  • RYU+mininet——mininet

    2019-05-08 22:12:24

    1. Mininet basics

    Mininet is a network emulator that runs a collection of end hosts, switches, routers, and links on a single Linux kernel. It uses lightweight virtualization to make a single system look like a complete network, running the same kernel, system, and user code. A Mininet host behaves just like a real machine: you can ssh into it and run arbitrary programs. The programs you run can send packets through what appear to be real Ethernet interfaces, with a given link speed and delay. Packets are processed by what look like real Ethernet switches, routers, or middleboxes. See the Mininet documentation for details.

    1.1 Mininet commands (the Mininet CLI)

    Type sudo mn in a terminal to start Mininet and enter its interactive CLI.

    • Basic commands:

    Typing help in the CLI lists the available commands; version 2.3 has the following 28 commands:

    EOF    gterm  iperfudp  nodes        pingpair      py      switch
    dpctl  help   link      noecho       pingpairfull  quit    time  
    dump   intfs  links     pingall      ports         sh      x     
    exit   iperf  net       pingallfull  px            source  xterm
    

    Typing help <command> shows detailed help for that command. Several groups of commonly used commands are described below:

    mininet> nodes               list all nodes
    mininet> net                 show link/topology information
    mininet> dump                show detailed information about every node
    mininet> pingall             test connectivity between all hosts
    mininet> pingpair            ping between the first two hosts

    mininet> link s1 h2 up/down  enable/disable the link between s1 and h2
    mininet> links               report the status of all links
    mininet> iperf h1 h2         run a simple TCP bandwidth test between two nodes
    mininet> iperfudp 10M h1 h2  run a simple UDP test between two nodes; 10M is the chosen bandwidth
    mininet> time [command]      measure how long a command takes to run

    mininet> xterm/gterm s1      open a terminal window for a node
    mininet> sh [cmd args]       run an external shell command
    mininet> px/py               execute a Python statement
    mininet> source <file>       read commands from an input file

    mininet> exit/quit/EOF       leave the Mininet CLI
    
    • Running system commands on a node:

    As mentioned above, Mininet uses lightweight virtualization, so each emulated host and switch is independent, and you can run any system command on an emulated host or switch exactly as you would in a real host's terminal. In the CLI the format is: node command, where command has the same syntax and usage as on a Linux host, for example:

    mininet> h1 ifconfig                                             show h1's network configuration
    mininet> h1 ping -c 4 h2                                         test connectivity between two hosts
    mininet> h1 ifconfig h1-eth0 10.108.126.3 netmask 255.255.255.0  change a virtual host's IP address and netmask
    

    You can also run Python scripts, for example starting a web server on one host and fetching the page from another:

    mininet> h1 python -m SimpleHTTPServer 80 &     # start a web server on host h1
    mininet> h2 wget -O - h1                        # fetch the page from host h2
    

    This usage is equivalent to opening a separate terminal for the node with xterm <node> and running the system command there.

    1.2 dpctl flow-table operations:

    dpctl and ovs-ofctl are command-line OpenFlow switch management tools that can be used to manipulate and manage flow tables. In the CLI the usage is:

    mininet> dpctl command [arg1] [arg2]        run a dpctl or ovs-ofctl command on every switch
    

    1.3 The Mininet GUI (MiniEdit)

    Mininet 2.2.0 and later support a GUI: the miniedit.py script is provided in /home/mininet/mininet/examples. Change into that directory and run:

    sudo python miniedit.py
    

    This opens Mininet's graphical interface. You can conveniently build custom topologies and settings in the GUI, save the created topology as a script, and open a topology from a script. See the MiniEdit documentation for details.

    1.4 mn startup options, in the form mn [options]

    As with other Linux commands, the options below can be combined freely after mn. Pay attention to the format of each option: some take their value with an equals sign, while others use a space.

    • Automatically set MAC addresses and ARP entries
    --mac       set MAC addresses automatically so that each MAC matches the last byte of the host's IP address
    --arp       install static ARP entries on every host for the other hosts in the same subnet

    • Choose the switch type
    --switch default|ivs|lxbr|ovs|ovsbr|ovsk|user[,param=value...]
        where ovs, default, and ovsk are all OVS (Open vSwitch) switches; lxbr=LinuxBridge, user=UserSwitch, ivs=IVSSwitch, ovsbr=OVSBridge

    • Choose the controller
    --controller=default|none|nox|ovsc|ref|remote|ryu[,param=value...]
                           where ovsc=OVSController, none=NullController,
                            remote=RemoteController, default=DefaultController,
                            nox=NOX, ryu=Ryu, ref=Controller
    --controller=remote,ip=[controller IP],port=[controller listening port]    use a remote controller
    

    For example, to start switches that support OpenFlow 1.3 and point them at a remote controller with IP 10.108.125.9:

    sudo mn --controller=remote,ip=10.108.125.9,port=6653 --switch ovsk,protocols=OpenFlow13
    
    • Define the network topology
    --topo single,n                 single switch (star) topology; n is the number of hosts

    --topo linear,n                 linear topology; n switches connected in a line, each with one host

    --topo tree,depth=a,fanout=b    tree topology; depth is the tree depth and fanout the number of children per node. All leaf nodes are hosts, non-leaf nodes are switches (see the example after this list)

    --topo minimal|reversed|torus[,param=value]       MinimalTopo, SingleSwitchReversedTopo, TorusTopo

    --custom ~/mininet/custom/mytopo.py --topo=mytopo  user-defined topology (mytopo.py is the Python file defining your own topology)
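
    For example (the counts follow from depth=2, fanout=2), the following builds a tree of 3 switches and 4 hosts; it is the same tree used in the flow-table examples later in this post:

    sudo mn --topo tree,depth=2,fanout=2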
    

    Note: In practice we usually do not create topologies in either of the two ways above. Instead, we write a Python script that imports the relevant modules and calls Mininet's functionality directly, and run it with the Python interpreter; this makes it easy to integrate with other projects. See "building custom topologies with Mininet" for details. Note that the Topo class and the Mininet class both provide addHost, addSwitch, and addLink methods, but their definitions differ; see the source files for details.
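
    As a minimal sketch of that approach (the two-host layout and the names used here are illustrative assumptions, not part of the original post), a script can build and start a network directly through the Mininet class:

    #!/usr/bin/python
    # Build a 2-host, 1-switch network directly with the Mininet API,
    # instead of launching it through the mn command.
    from mininet.net import Mininet
    from mininet.node import OVSSwitch, Controller
    from mininet.cli import CLI
    from mininet.log import setLogLevel

    def run():
        net = Mininet(switch=OVSSwitch, controller=Controller)
        net.addController('c0')
        s1 = net.addSwitch('s1')
        h1 = net.addHost('h1', ip='10.0.0.1/24')
        h2 = net.addHost('h2', ip='10.0.0.2/24')
        net.addLink(h1, s1)
        net.addLink(h2, s1)
        net.start()
        net.pingAll()      # quick connectivity check
        CLI(net)           # drop into the Mininet CLI
        net.stop()

    if __name__ == '__main__':
        setLogLevel('info')
        run()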

    • Open an xterm for every node (hosts, switches, and controllers) at startup
    mn -x       // same effect as running xterm in the CLI

    • Exit and clean up
    mn -c
    

    Note: after exiting Mininet, the next start often reports the following error:
    Exception: Error creating interface pair (s5-eth1,s1-eth3): RTNETLINK answers: File exists
    This is caused by processes that were not fully killed, so when cleaning up after leaving the Mininet environment, run the cleanup with sudo:
    sudo mn -c

    • Other available mn options
      --host=HOST           cfs|proc|rt[,param=value...]
                            rt=CPULimitedHost{'sched': 'rt'} 
                            proc=Host
                            cfs=CPULimitedHost{'sched': 'cfs'}
      --link=LINK           default|ovs|tc|tcu[,param=value...] 
                            default=Link ovs=OVSLink tcu=TCULink tc=TCLink
      --custom=CUSTOM       read custom classes or params from .py file(s)
      --test=TEST           none|build|all|iperf|pingpair|iperfudp|pingall
      -i IPBASE, --ipbase=IPBASE
                            base IP address for hosts
      -v VERBOSITY, --verbosity=VERBOSITY
                            info|warning|critical|error|debug|output
      --innamespace         sw and ctrl in namespace?
      --listenport=LISTENPORT
                            base port for passive switch listening
      --nolistenport        don't use passive listening port
      --pre=PRE             CLI script to run before tests
      --post=POST           CLI script to run after tests
      --pin                 pin hosts to CPU cores (requires --host cfs or --host rt)
      --nat                 [option=val...] adds a NAT to the topology that connects Mininet hosts to the physical network.
                            Warning: This may route any traffic on the machine that uses Mininet's IP subnet into the 
                            Mininet network. If you need to change Mininet's IP subnet, see the --ipbase option.
      --version             prints the version and exits
      --cluster=server1,server2...
                            run on multiple servers (experimental!)
      --placement=block|random
                            node placement for --cluster (experimental!)
    


    2. Flow-table operations

    2.1 Viewing a switch's flow table with dpctl

    • View flow entries:
    mininet>dpctl dump-flows
    

    In a tree topology with depth=2 and fanout=2, running h1 ping -c1 h2 produces the following flow entries:

    mininet> dpctl dump-flows
    *** s1 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
    *** s2 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=4.087s, table=0, n_packets=1, n_bytes=42, idle_timeout=60, idle_age=4, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=06:7d:8b:e3:9f:b5,dl_dst=22:7e:4a:a6:25:c7,arp_spa=10.0.0.2,arp_tpa=10.0.0.1,arp_op=2 actions=output:1
     cookie=0x0, duration=4.086s, table=0, n_packets=1, n_bytes=98, idle_timeout=60, idle_age=4, priority=65535,icmp,in_port=1,vlan_tci=0x0000,dl_src=22:7e:4a:a6:25:c7,dl_dst=06:7d:8b:e3:9f:b5,nw_src=10.0.0.1,nw_dst=10.0.0.2,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:2
     cookie=0x0, duration=4.085s, table=0, n_packets=1, n_bytes=98, idle_timeout=60, idle_age=4, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=06:7d:8b:e3:9f:b5,dl_dst=22:7e:4a:a6:25:c7,nw_src=10.0.0.2,nw_dst=10.0.0.1,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:1
    *** s3 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
    mininet> 
    
    • Add flow entries:
    mininet>dpctl add-flow in_port=2,actions=output:1  
    mininet>dpctl add-flow in_port=1,actions=output:2
    

    Add a flow to every switch so that packets entering on port 1 are forwarded out of port 2. After adding it, the flow entries look like this (the switches' flow tables were cleared beforehand):

    mininet> dpctl add-flow in_port=1,actions=output:2
    *** s1 ------------------------------------------------------------------------
    *** s2 ------------------------------------------------------------------------
    *** s3 ------------------------------------------------------------------------
    mininet> dpctl dump-flows
    *** s1 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=3.454s, table=0, n_packets=0, n_bytes=0, idle_age=3, in_port=1 actions=output:2
    *** s2 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=3.454s, table=0, n_packets=0, n_bytes=0, idle_age=3, in_port=1 actions=output:2
    *** s3 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=3.455s, table=0, n_packets=0, n_bytes=0, idle_age=3, in_port=1 actions=output:2
    

    The matched packet counters (n_packets) are all 0. Now run h1 ping h2 and look at the flow entries again:

    mininet> h1 ping -c1 h2
    PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.79 ms
    
    --- 10.0.0.2 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 1.793/1.793/1.793/0.000 ms
    mininet> dpctl dump-flows
    *** s1 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=35.967s, table=0, n_packets=2, n_bytes=140, idle_age=0, in_port=1 actions=output:2
    *** s2 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=35.967s, table=0, n_packets=2, n_bytes=140, idle_age=0, in_port=1 actions=output:2
    *** s3 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
     cookie=0x0, duration=35.968s, table=0, n_packets=0, n_bytes=0, idle_age=35, in_port=1 actions=output:2
    

    The ping now succeeds directly, and the flow entry on s2 has matched two packets (an ARP request and an ICMP request). However, h1 ping h3 fails, because every packet entering port 1 is unconditionally forwarded out of port 2:

    mininet> h1 ping -c1 h3
    PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
    From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
    
    --- 10.0.0.3 ping statistics ---
    1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
    
    • Delete flow entries:
    mininet>dpctl del-flows
    

    After deleting them, the flow tables are empty:

    mininet> dpctl del-flows
    *** s1 ------------------------------------------------------------------------
    *** s2 ------------------------------------------------------------------------
    *** s3 ------------------------------------------------------------------------
    mininet> dpctl dump-flows
    *** s1 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
    *** s2 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
    *** s3 ------------------------------------------------------------------------
    NXST_FLOW reply (xid=0x4):
    

    Operating on flow tables this way is quite inconvenient and rarely used in practice. Flow tables are usually managed with tools such as ovs-ofctl in a terminal, or through a controller, as covered in the next section.

    2.2 Managing flow tables with ovs-ofctl

    ovs-ofctl is a command-line switch management tool that can also be used in a terminal to manage OpenFlow flow tables.

    View the flow entries in a switch:

    sudo ovs-ofctl dump-flows -O openflow13 s1   # -O selects the OpenFlow version; s1 is the switch name
    ovs-ofctl dump-flows br-sw
    

    Of course, the same command can be invoked directly from the Mininet CLI with sh:

    mininet> sh ovs-ofctl dump-flows -O openflow13 s1   # same effect as running it in a terminal
    

    Run the following command to add a flow entry that drops the packets host 1 sends to host 2:

    ovs-ofctl add-flow br-sw 'dl_type=0x0800,nw_src=10.0.0.7,nw_dst=10.0.0.11,priority=27,table=0,actions=drop'
    

    The match fields of this entry are dl_type=0x0800 (the payload above the MAC frame is IP), nw_src=10.0.0.7 (source IP address 10.0.0.7), and nw_dst=10.0.0.11 (destination IP address 10.0.0.11). The priority is set to 27, higher than the other entries, so it is matched first; the table id is 0, i.e., the entry is installed into table 0. The entry means: IP packets sent from host 10.0.0.7 to host 10.0.0.11 are dropped.

    Other flow-table operations
    1. Show the br-int bridge and its ports
    ovs-ofctl -O OpenFlow13 show br-int

    2. List port statistics for the br-int bridge
    ovs-ofctl dump-ports -O OpenFlow13 br-int

    3. Show detailed statistics for one port of the br-int bridge
    ovs-ofctl dump-ports -O OpenFlow13 br-int 1

    4. View port information in Open vSwitch
    ovs-ofctl show -O OpenFlow13 br-int

    5. Get the OpenFlow port number of a network interface
    ovs-vsctl get Interface tap8f178fef-10 ofport

    6. View the flow table of a bridge
    ovs-ofctl dump-flows -O OpenFlow13 br-int

    7. View datapath information in OVS
    ovs-dpctl show

    8. Trace how traffic traverses the flow table
    ovs-appctl ofproto/trace br-int in_port=2 | grep "Rule|action"

    9. Set the controller for an OVS bridge
    ovs-vsctl set-controller br0 tcp:1.2.3.4:663

    10. Add a flow to an OVS bridge
    ovs-ofctl add-flow br0 in_port=1,actions=output:2

    11. Delete all flows from a bridge
    ovs-ofctl del-flows br0

    12. Delete flows from a bridge by match fields
    ovs-ofctl del-flows br0 "in_port=1"

    Flow tables

    In traditional network devices, packet forwarding in switches and routers relies on the layer-2 MAC forwarding table or the layer-3 IP routing table stored in the device. The flow table used by an OpenFlow switch plays the same role, except that its entries combine configuration information from multiple network layers, so richer rules can be used when forwarding data.

    In OpenFlow v1.3 a flow entry consists of seven main parts: match fields (identify the flow the entry applies to), priority (defines the precedence of the entry), counters (statistics associated with the entry), instructions (actions to apply to packets that match the entry), timeouts (the maximum amount of time, or remaining time, before the entry expires in the switch), a cookie (an opaque value chosen by the controller), and flags.
    Match fields: used to match packets, including the ingress port and packet headers, as well as optional metadata set by a previous table. A flow entry is identified by its match fields and priority; together they uniquely identify an entry within a flow table.
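
    As a rough illustration of how these parts show up in practice (the bridge name s1 and all field values below are assumptions, written in ovs-ofctl's flow syntax rather than raw OpenFlow), a single entry touching most of the components might look like:

    ovs-ofctl -O OpenFlow13 add-flow s1 'table=0,priority=100,cookie=0x1,idle_timeout=60,hard_timeout=300,in_port=1,ip,nw_dst=10.0.0.2,actions=output:2'

    Here table, priority, cookie, and the two timeouts are entry attributes; in_port, ip, and nw_dst form the match fields; actions corresponds to the instruction part; the counters are maintained by the switch and shown by dump-flows.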

  • mininet.zip

    2020-01-06 20:55:06
    mininet.zip: Mininet installation source for Ubuntu; enter mininet/util and run sudo ./install.sh -a to install
  • mininet topology

    2018-08-06 22:30:31

    prerequisite

    All scripts should enable ip_forward:

    echo 1 > /proc/sys/net/ipv4/ip_forward
    

    2 hosts and 4 routers topology

    This code is referenced from [2]. From this example, I learned the difference between "ip route add" and "route add" [1].
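
    As a hedged illustration of that difference (the addresses are taken from the topology below), the iproute2 form names the prefix and next hop explicitly, while the legacy net-tools form uses -net/netmask/gw:

    ip route add 10.0.4.0/24 via 10.0.1.2 dev r1-eth1
    route add -net 10.0.4.0 netmask 255.255.255.0 gw 10.0.1.2
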
    2h4s.py

    #!/usr/bin/python
    from mininet.topo import Topo
    from mininet.net import Mininet
    from mininet.cli import CLI
    import time
    #   0   1   2   3   4
    #h1--r1--r2--r3--r4--h2
    # 10.0.X.0
    def rp_disable(host):
        ifaces = host.cmd('ls /proc/sys/net/ipv4/conf')
        ifacelist = ifaces.split()    # default is to split on whitespace
        for iface in ifacelist:
           if iface != 'lo': host.cmd('sysctl net.ipv4.conf.' + iface + '.rp_filter=0')
    
    net = Mininet( cleanup=True )
    h1 = net.addHost('h1',ip='10.0.0.1')
    r1 = net.addHost('r1',ip='10.0.0.2')
    r2 = net.addHost('r2',ip='10.0.1.2')
    r3 = net.addHost('r3',ip='10.0.2.2')
    r4 = net.addHost('r4',ip='10.0.3.2')
    h2 = net.addHost('h2',ip='10.0.4.2')
    c0 = net.addController( 'c0' )
    
    net.addLink(h1,r1,intfName1='h1-eth0',intfName2='r1-eth0')
    net.addLink(r1,r2,intfName1='r1-eth1',intfName2='r2-eth0')
    net.addLink(r2,r3,intfName1='r2-eth1',intfName2='r3-eth0')
    net.addLink(r3,r4,intfName1='r3-eth1',intfName2='r4-eth0')
    net.addLink(r4,h2,intfName1='r4-eth1',intfName2='h2-eth0')
    net.build()
    
    h1.setIP('10.0.0.1', intf='h1-eth0')
    h1.cmd("ifconfig h1-eth0 10.0.0.1 netmask 255.255.255.0")
    h1.cmd("route add default gw 10.0.0.2 dev h1-eth0")
    
    r1.cmd("ifconfig r1-eth0 10.0.0.2/24")
    r1.cmd("ifconfig r1-eth1 10.0.1.1/24")
    r1.cmd("ip route add to 10.0.4.0/24 via 10.0.1.2")
    r2.cmd("ip route add to 10.0.0.0/24 via 10.0.0.1")
    r1.cmd('sysctl net.ipv4.ip_forward=1')
    rp_disable(r1)
    
    r2.cmd("ifconfig r2-eth0 10.0.1.2/24")
    r2.cmd("ifconfig r2-eth1 10.0.2.1/24")
    r2.cmd("ip route add to 10.0.4.0/24 via 10.0.2.2")
    r2.cmd("ip route add to 10.0.0.0/24 via 10.0.1.1")
    r2.cmd('sysctl net.ipv4.ip_forward=1')
    rp_disable(r2)
    
    r3.cmd("ifconfig r3-eth0 10.0.2.2/24")
    r3.cmd("ifconfig r3-eth1 10.0.3.1/24")
    r3.cmd("ip route add to 10.0.4.0/24 via 10.0.3.2")
    r3.cmd("ip route add to 10.0.0.0/24 via 10.0.2.1")
    r3.cmd('sysctl net.ipv4.ip_forward=1')
    rp_disable(r3)
    
    r4.cmd("ifconfig r4-eth0 10.0.3.2/24")
    r4.cmd("ifconfig r4-eth1 10.0.4.1/24")
    r4.cmd("ip route add to 10.0.4.0/24 via 10.0.4.2")
    r4.cmd("ip route add to 10.0.0.0/24 via 10.0.3.1")
    r4.cmd('sysctl net.ipv4.ip_forward=1')
    rp_disable(r4)
    
    h2.setIP('10.0.4.2', intf='h2-eth0')
    h2.cmd("ifconfig h2-eth0 10.0.4.2 netmask 255.255.255.0")
    h2.cmd("route add default gw 10.0.4.1")
    
    net.start()
    time.sleep(1)
    CLI(net)
    net.stop()
    

    [1] The difference between the ip route and route commands
    [2] routeline topology
    [3] An Introduction to Computer Networks

    star topology

    #!/usr/bin/python
    from mininet.topo import Topo
    from mininet.net import Mininet
    from mininet.cli import CLI
    import time
    #        h1
    #         |1      10.0.x.0
    #        r1
    #       /2   \3   
    #      h2   h3
    #
    net = Mininet( cleanup=True )
    h1 = net.addHost('h1',ip='10.0.1.1')
    h2 = net.addHost('h2',ip='10.0.2.2')
    h3 = net.addHost('h3',ip='10.0.3.2')
    r1 = net.addHost('r1',ip='10.0.1.2')
    net.addLink(h1,r1,intfName1='h1-eth0',intfName2='r1-eth0')
    net.addLink(h2,r1,intfName1='h2-eth0',intfName2='r1-eth1')
    net.addLink(h3,r1,intfName1='h3-eth0',intfName2='r1-eth2')
    net.build()
    
    h1.cmd("ifconfig h1-eth0 10.0.1.1/24")
    h1.cmd("route add default gw 10.0.1.2")
    
    h2.cmd("ifconfig h2-eth0 10.0.2.2/24")
    h2.cmd("route add default gw 10.0.2.1")
    
    h3.cmd("ifconfig h3-eth0 10.0.3.2/24")
    h3.cmd("route add default gw 10.0.3.1")
    
    r1.cmd("ifconfig r1-eth0 10.0.1.2/24")
    r1.cmd("ifconfig r1-eth1 10.0.2.1/24")
    r1.cmd("ifconfig r1-eth2 10.0.3.1/24")
    
    r1.cmd("ip route add to 10.0.1.0/24 via 10.0.1.1")
    r1.cmd("ip route add to 10.0.2.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.3.0/24 via 10.0.3.2")
    r1.cmd('sysctl net.ipv4.ip_forward=1')
    net.start()
    time.sleep(1)
    CLI(net)
    net.stop()
    

    mininet multi-interface ring topology

    This trivial setup cost me nearly an afternoon and a night.
    Thanks to this blog [1].
    Tips for use:

    1 sudo su
    2 python 4h.py
    3 xterm h1 h3
    4 ping X.X.X.X
    

    4h.py

    #!/usr/bin/python
    from mininet.topo import Topo
    from mininet.net import Mininet
    from mininet.cli import CLI
    import time
    ##https://serverfault.com/questions/417885/configure-gateway-for-two-nics-through-static-routeing
    #    ____h2____
    #   /          \
    # h1           h3
    #   \___h4_____/
    #  
    net = Mininet( cleanup=True )
    h1 = net.addHost('h1',ip='10.0.1.1')
    h2 = net.addHost('h2',ip='10.0.1.2')
    h3 = net.addHost('h3',ip='10.0.2.2')
    h4 = net.addHost('h4',ip='10.0.3.2')
    c0 = net.addController( 'c0' )
    net.addLink(h1,h2,intfName1='h1-eth0',intfName2='h2-eth0')
    net.addLink(h2,h3,intfName1='h2-eth1',intfName2='h3-eth0')
    net.addLink(h1,h4,intfName1='h1-eth1',intfName2='h4-eth0')
    net.addLink(h4,h3,intfName1='h4-eth1',intfName2='h3-eth1')
    net.build()
    h1.setIP('10.0.1.1', intf='h1-eth0')
    h1.cmd("ifconfig h1-eth0 10.0.1.1 netmask 255.255.255.0")
    
    h1.setIP('10.0.3.1', intf='h1-eth1')
    h1.cmd("ifconfig h1-eth1 10.0.3.1 netmask 255.255.255.0")
    
    h1.cmd("ip route flush all proto static scope global")
    h1.cmd("ip route add 10.0.1.1/24 dev h1-eth0 table 5000")
    h1.cmd("ip route add default via 10.0.1.2 dev h1-eth0 table 5000")
    
    h1.cmd("ip route add 10.0.3.1/24 dev h1-eth1 table 5001")
    h1.cmd("ip route add default via 10.0.3.2 dev h1-eth1 table 5001")
    h1.cmd("ip rule add from 10.0.1.1 table 5000")
    h1.cmd("ip rule add from 10.0.3.1 table 5001")
    h1.cmd("route add default gw 10.0.1.2  dev h1-eth0")
    
    h2.setIP('10.0.1.2', intf='h2-eth0')
    h2.setIP('10.0.2.1', intf='h2-eth1')
    h2.cmd("ifconfig h2-eth0 10.0.1.2/24")
    h2.cmd("ifconfig h2-eth1 10.0.2.1/24")
    h2.cmd("ip route add 10.0.2.0/24 via 10.0.2.2")
    h2.cmd("ip route add 10.0.1.0/24 via 10.0.1.1")
    h2.cmd("echo 1 > /proc/sys/net/ipv4/ip_forward")
    
    
    h4.setIP('10.0.3.2', intf='h4-eth0')
    h4.setIP('10.0.4.1', intf='h4-eth1')
    h4.cmd("ifconfig h4-eth0 10.0.3.2/24")
    h4.cmd("ifconfig h4-eth1 10.0.4.1/24")
    h4.cmd("ip route add 10.0.4.0  dev h4-eth1") #via 10.0.4.2
    h4.cmd("ip route add 10.0.3.0 via 10.0.3.1")
    
    h4.cmd("echo 1 > /proc/sys/net/ipv4/ip_forward")
    
    
    h3.setIP('10.0.2.2', intf='h3-eth0')
    h3.cmd("ifconfig h3-eth0 10.0.2.2 netmask 255.255.255.0")
    h3.setIP('10.0.4.2', intf='h3-eth1')
    h3.cmd("ifconfig h3-eth1 10.0.4.2 netmask 255.255.255.0")
    
    h3.cmd("ip route flush all proto static scope global")
    h3.cmd("ip route add 10.0.2.2/24 dev h3-eth0 table 5000")
    h3.cmd("ip route add default via 10.0.2.1 dev h3-eth0 table 5000")
    
    h3.cmd("ip route add 10.0.4.2/24 dev h3-eth1 table 5001")
    h3.cmd("ip route add default via 10.0.4.1 dev h3-eth1 table 5001")
    h3.cmd("ip rule add from 10.0.2.2 table 5000")
    h3.cmd("ip rule add from 10.0.4.2 table 5001")
    
    net.start()
    time.sleep(1)
    CLI(net)
    net.stop()
    

    [1] Configure gateway for two NICs through static routeing

    3 hosts and 4 routers

    #!/usr/bin/python
    from mininet.topo import Topo
    from mininet.net import Mininet
    from mininet.cli import CLI
    import time
    #   0   1   2   3   4
    #h1--r1--r2--r3--r4--h2
    #             \5
    #              h3
    # 10.0.X.0
    def rp_disable(host):
        ifaces = host.cmd('ls /proc/sys/net/ipv4/conf')
        ifacelist = ifaces.split()    # default is to split on whitespace
        for iface in ifacelist:
           if iface != 'lo': host.cmd('sysctl net.ipv4.conf.' + iface + '.rp_filter=0')
    
    net = Mininet( cleanup=True )
    h1 = net.addHost('h1',ip='10.0.0.1')
    r1 = net.addHost('r1',ip='10.0.0.2')
    r2 = net.addHost('r2',ip='10.0.1.2')
    r3 = net.addHost('r3',ip='10.0.2.2')
    r4 = net.addHost('r4',ip='10.0.3.2')
    h2 = net.addHost('h2',ip='10.0.4.2')
    h3 = net.addHost('h3',ip='10.0.5.2')
    c0 = net.addController( 'c0' )
    
    net.addLink(h1,r1,intfName1='h1-eth0',intfName2='r1-eth0')
    net.addLink(r1,r2,intfName1='r1-eth1',intfName2='r2-eth0')
    net.addLink(r2,r3,intfName1='r2-eth1',intfName2='r3-eth0')
    net.addLink(r3,r4,intfName1='r3-eth1',intfName2='r4-eth0')
    net.addLink(r4,h2,intfName1='r4-eth1',intfName2='h2-eth0')
    net.addLink(r3,h3,intfName1='r3-eth2',intfName2='h3-eth0')
    net.build()
    
    h1.setIP('10.0.0.1', intf='h1-eth0')
    h1.cmd("ifconfig h1-eth0 10.0.0.1 netmask 255.255.255.0")
    h1.cmd("route add default gw 10.0.0.2")
    
    h3.cmd("ifconfig h3-eth0 10.0.5.2/24")
    h3.cmd("route add default gw 10.0.5.1")
    
    r1.cmd("ifconfig r1-eth0 10.0.0.2/24")
    r1.cmd("ifconfig r1-eth1 10.0.1.1/24")
    r1.cmd("ip route add to 10.0.4.0/24 via 10.0.1.2")
    r1.cmd("ip route add to 10.0.0.0/24 via 10.0.0.1")
    r1.cmd("ip route add to 10.0.5.0/24 via 10.0.1.2")
    r1.cmd('sysctl net.ipv4.ip_forward=1')
    #rp_disable(r1)
    
    r2.cmd("ifconfig r2-eth0 10.0.1.2/24")
    r2.cmd("ifconfig r2-eth1 10.0.2.1/24")
    r2.cmd("ip route add to 10.0.4.0/24 via 10.0.2.2")
    r2.cmd("ip route add to 10.0.0.0/24 via 10.0.1.1")
    r2.cmd("ip route add to 10.0.5.0/24 via 10.0.2.2")
    r2.cmd('sysctl net.ipv4.ip_forward=1')
    #rp_disable(r2)
    
    r3.cmd("ifconfig r3-eth0 10.0.2.2/24")
    r3.cmd("ifconfig r3-eth1 10.0.3.1/24")
    r3.cmd("ifconfig r3-eth2 10.0.5.1/24")
    r3.cmd("ip route add to 10.0.4.0/24 via 10.0.3.2")
    r3.cmd("ip route add to 10.0.0.0/24 via 10.0.2.1")
    r3.cmd("ip route add to 10.0.5.0/24 via 10.0.5.2")
    r3.cmd('sysctl net.ipv4.ip_forward=1')
    #rp_disable(r3)
    
    r4.cmd("ifconfig r4-eth0 10.0.3.2/24")
    r4.cmd("ifconfig r4-eth1 10.0.4.1/24")
    r4.cmd("ip route add to 10.0.4.0/24 via 10.0.4.2")
    r4.cmd("ip route add to 10.0.0.0/24 via 10.0.3.1")
    r4.cmd("ip route add to 10.0.5.0/24 via 10.0.3.1")
    r4.cmd('sysctl net.ipv4.ip_forward=1')
    #rp_disable(r4)
    
    h2.cmd("ifconfig h2-eth0 10.0.4.2/24")
    h2.cmd("route add default gw 10.0.4.1")
    
    net.start()
    time.sleep(1)
    CLI(net)
    net.stop()
    

    mininet multi-interface topology

    #!/usr/bin/python
    from mininet.topo import Topo
    from mininet.net import Mininet
    from mininet.cli import CLI
    from mininet.link import TCLink
    import time
    #    ___r1____
    #   /          \0  1
    # h1            r3---h2
    #  \           /2
    #   ---r2-----
    max_queue_size = 20  
    net = Mininet( cleanup=True )
    h1 = net.addHost('h1',ip='10.0.1.1')
    r1 = net.addHost('r1',ip='10.0.1.2')
    r2 = net.addHost('r2',ip='10.0.3.2')
    r3 = net.addHost('r3',ip='10.0.5.1')
    h2 = net.addHost('h2',ip='10.0.5.2')
    c0 = net.addController('c0')
    net.addLink(h1,r1,intfName1='h1-eth0',intfName2='r1-eth0',cls=TCLink , bw=2, delay='20ms', max_queue_size=max_queue_size)
    net.addLink(r1,r3,intfName1='r1-eth1',intfName2='r3-eth0',cls=TCLink , bw=2, delay='20ms', max_queue_size=max_queue_size)
    net.addLink(r3,h2,intfName1='r3-eth1',intfName2='h2-eth0',cls=TCLink , bw=10, delay='10ms', max_queue_size=max_queue_size)
    net.addLink(h1,r2,intfName1='h1-eth1',intfName2='r2-eth0',cls=TCLink , bw=2, delay='20ms', max_queue_size=max_queue_size)
    net.addLink(r2,r3,intfName1='r2-eth1',intfName2='r3-eth2',cls=TCLink , bw=2, delay='20ms', max_queue_size=max_queue_size)
    
    net.build()
    
    h1.cmd("ifconfig h1-eth0 10.0.1.1/24")
    h1.cmd("ifconfig h1-eth1 10.0.3.1/24")
    h1.cmd("ip route flush all proto static scope global")
    h1.cmd("ip route add 10.0.1.1/24 dev h1-eth0 table 5000")
    h1.cmd("ip route add default via 10.0.1.2 dev h1-eth0 table 5000")
    
    h1.cmd("ip route add 10.0.3.1/24 dev h1-eth1 table 5001")
    h1.cmd("ip route add default via 10.0.3.2 dev h1-eth1 table 5001")
    h1.cmd("ip rule add from 10.0.1.1 table 5000")
    h1.cmd("ip rule add from 10.0.3.1 table 5001")
    h1.cmd("ip route add default gw 10.0.1.2  dev h1-eth0")
    #this is a must, or else a TCP client would not know how to route packets out
    #h1.cmd("route add default gw 10.0.1.2  dev h1-eth0") would not work for the second part when a TCP client binds an address
    
    
    r1.cmd("ifconfig r1-eth0 10.0.1.2/24")
    r1.cmd("ifconfig r1-eth1 10.0.2.1/24")
    r1.cmd("ip route add to 10.0.1.0/24 via 10.0.1.1")
    r1.cmd("ip route add to 10.0.2.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.5.0/24 via 10.0.2.2")
    r1.cmd('sysctl net.ipv4.ip_forward=1')
    
    r3.cmd("ifconfig r3-eth0 10.0.2.2/24")
    r3.cmd("ifconfig r3-eth1 10.0.5.1/24")
    r3.cmd("ifconfig r3-eth2 10.0.4.2/24")
    r3.cmd("ip route add to 10.0.1.0/24 via 10.0.2.1")
    r3.cmd("ip route add to 10.0.2.0/24 via 10.0.2.1")
    r3.cmd("ip route add to 10.0.5.0/24 via 10.0.5.2")
    r3.cmd("ip route add to 10.0.4.0/24 via 10.0.4.1")
    r3.cmd("ip route add to 10.0.3.0/24 via 10.0.4.1")
    r3.cmd('sysctl net.ipv4.ip_forward=1')
    
    r2.cmd("ifconfig r2-eth0 10.0.3.2/24")
    r2.cmd("ifconfig r2-eth1 10.0.4.1/24")
    r2.cmd("ip route add to 10.0.3.0/24 via 10.0.3.1")
    r2.cmd("ip route add to 10.0.4.0/24 via 10.0.4.2")
    r2.cmd("ip route add to 10.0.5.0/24 via 10.0.4.2")
    r2.cmd('sysctl net.ipv4.ip_forward=1')
    
    h2.cmd("ifconfig h2-eth0 10.0.5.2/24")
    h2.cmd("route add default gw 10.0.5.1")
    
    net.start()
    time.sleep(1)
    CLI(net)
    net.stop()
    

    h1.cmd("route add default gw 10.0.1.2 dev h1-eth0"): this line makes ping 10.0.5.2 work. If it is commented out, ping 10.0.5.2 fails, but "ping -I 10.0.1.1 10.0.5.2" still works normally.
    ring topology
    r4 routes packets from 10.0.1.1 to r5 and packets with source IP 10.0.3.1 to r6. This topology was created for multipath protocol tests.
    ringTopology.py

    #!/usr/bin/python
    from mininet.topo import Topo
    from mininet.net import Mininet
    from mininet.link import TCLink
    from mininet.cli import CLI
    import time
    import subprocess
    import os,signal
    import sys
    #                  5.0   6.0     7.0
    #    ___r1____           ____r5____
    #   /          \0 1   0 /1         \  10.0
    # h1            r3-----r4            r7---h2
    #  \           /2       \____r6____/
    #   ---r2-----           8.0    9.0
    max_queue_size = 200  
    net = Mininet( cleanup=True )
    h1 = net.addHost('h1',ip='10.0.1.1')
    r1 = net.addHost('r1',ip='10.0.1.2')
    r2 = net.addHost('r2',ip='10.0.3.2')
    r3 = net.addHost('r3',ip='10.0.5.1')
    
    r4 = net.addHost('r4',ip='10.0.5.2')
    r5 = net.addHost('r5',ip='10.0.6.2')
    r6 = net.addHost('r6',ip='10.0.8.2')
    r7 = net.addHost('r7',ip='10.0.10.1')
    h2 = net.addHost('h2',ip='10.0.10.2')
    c0 = net.addController('c0')
    net.addLink(h1,r1,intfName1='h1-eth0',intfName2='r1-eth0',cls=TCLink , bw=500, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(r1,r3,intfName1='r1-eth1',intfName2='r3-eth0',cls=TCLink , bw=500, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(h1,r2,intfName1='h1-eth1',intfName2='r2-eth0',cls=TCLink , bw=500, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(r2,r3,intfName1='r2-eth1',intfName2='r3-eth2',cls=TCLink , bw=500, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(r3,r4,intfName1='r3-eth1',intfName2='r4-eth0',cls=TCLink , bw=100, delay='10ms', max_queue_size=max_queue_size)
    net.addLink(r4,r5,intfName1='r4-eth1',intfName2='r5-eth0',cls=TCLink , bw=100, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(r5,r7,intfName1='r5-eth1',intfName2='r7-eth0',cls=TCLink , bw=100, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(r7,h2,intfName1='r7-eth1',intfName2='h2-eth0',cls=TCLink , bw=100, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(r4,r6,intfName1='r4-eth2',intfName2='r6-eth0',cls=TCLink , bw=100, delay='10ms', max_queue_size=10*max_queue_size)
    net.addLink(r6,r7,intfName1='r6-eth1',intfName2='r7-eth2',cls=TCLink , bw=100, delay='10ms', max_queue_size=10*max_queue_size)
    net.build()
    
    h1.cmd("ifconfig h1-eth0 10.0.1.1/24")
    h1.cmd("ifconfig h1-eth1 10.0.3.1/24")
    h1.cmd("ip route flush all proto static scope global")
    h1.cmd("ip route add 10.0.1.1/24 dev h1-eth0 table 5000")
    h1.cmd("ip route add default via 10.0.1.2 dev h1-eth0 table 5000")
    
    h1.cmd("ip route add 10.0.3.1/24 dev h1-eth1 table 5001")
    h1.cmd("ip route add default via 10.0.3.2 dev h1-eth1 table 5001")
    h1.cmd("ip rule add from 10.0.1.1 table 5000")
    h1.cmd("ip rule add from 10.0.3.1 table 5001")
    h1.cmd("ip route add default gw 10.0.1.2  dev h1-eth0")
    #this is a must, or else a TCP client would not know how to route packets out
    h1.cmd("route add default gw 10.0.1.2  dev h1-eth0") #would not work for the second part when a TCP client binds an address
    
    
    r1.cmd("ifconfig r1-eth0 10.0.1.2/24")
    r1.cmd("ifconfig r1-eth1 10.0.2.1/24")
    r1.cmd("ip route add to 10.0.1.0/24 via 10.0.1.1")
    r1.cmd("ip route add to 10.0.2.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.5.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.6.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.7.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.8.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.9.0/24 via 10.0.2.2")
    r1.cmd("ip route add to 10.0.10.0/24 via 10.0.2.2")
    r1.cmd('sysctl net.ipv4.ip_forward=1')
    
    r3.cmd("ifconfig r3-eth0 10.0.2.2/24")
    r3.cmd("ifconfig r3-eth1 10.0.5.1/24")
    r3.cmd("ifconfig r3-eth2 10.0.4.2/24")
    r3.cmd("ip route add to 10.0.1.0/24 via 10.0.2.1")
    r3.cmd("ip route add to 10.0.2.0/24 via 10.0.2.1")
    r3.cmd("ip route add to 10.0.5.0/24 via 10.0.5.2")
    r3.cmd("ip route add to 10.0.4.0/24 via 10.0.4.1")
    r3.cmd("ip route add to 10.0.3.0/24 via 10.0.4.1")
    r3.cmd("ip route add to 10.0.6.0/24 via 10.0.5.2")
    r3.cmd("ip route add to 10.0.7.0/24 via 10.0.5.2")
    r3.cmd("ip route add to 10.0.8.0/24 via 10.0.5.2")
    r3.cmd("ip route add to 10.0.9.0/24 via 10.0.5.2")
    r3.cmd("ip route add to 10.0.10.0/24 via 10.0.5.2")
    r3.cmd('sysctl net.ipv4.ip_forward=1')
    
    r2.cmd("ifconfig r2-eth0 10.0.3.2/24")
    r2.cmd("ifconfig r2-eth1 10.0.4.1/24")
    r2.cmd("ip route add to 10.0.3.0/24 via 10.0.3.1")
    r2.cmd("ip route add to 10.0.4.0/24 via 10.0.4.2")
    r2.cmd("ip route add to 10.0.5.0/24 via 10.0.4.2")
    r2.cmd("ip route add to 10.0.6.0/24 via 10.0.4.2")
    r2.cmd("ip route add to 10.0.7.0/24 via 10.0.4.2")
    r2.cmd("ip route add to 10.0.8.0/24 via 10.0.4.2")
    r2.cmd("ip route add to 10.0.9.0/24 via 10.0.4.2")
    r2.cmd("ip route add to 10.0.10.0/24 via 10.0.4.2")
    r2.cmd('sysctl net.ipv4.ip_forward=1')
    
    r4.cmd("ifconfig r4-eth0 10.0.5.2/24")
    r4.cmd("ifconfig r4-eth1 10.0.6.1/24")
    r4.cmd("ifconfig r4-eth2 10.0.8.1/24")
    r4.cmd("ip route add to 10.0.1.0/24 via 10.0.5.1")
    r4.cmd("ip route add to 10.0.2.0/24 via 10.0.5.1")
    r4.cmd("ip route add to 10.0.3.0/24 via 10.0.5.1")
    r4.cmd("ip route add to 10.0.4.0/24 via 10.0.5.1")
    r4.cmd("ip route add to 10.0.5.0/24 via 10.0.5.1")
    r4.cmd("ip route add to 10.0.6.0/24 via 10.0.6.2")
    r4.cmd("ip route add to 10.0.7.0/24 via 10.0.6.2")
    r4.cmd("ip route add to 10.0.8.0/24 via 10.0.8.2")
    r4.cmd("ip route add to 10.0.9.0/24 via 10.0.8.2")
    
    r4.cmd("ip route add 10.0.6.1/24 dev r4-eth1 table 5000")
    r4.cmd("ip route add default via 10.0.6.2 dev r4-eth1 table 5000")
    r4.cmd("ip route add 10.0.8.1/24 dev r4-eth2 table 5001")
    r4.cmd("ip route add default via 10.0.8.2 dev r4-eth2 table 5001")
    r4.cmd("ip rule add from 10.0.1.1 table 5000")
    r4.cmd("ip rule add from 10.0.3.1 table 5001")
    r4.cmd('sysctl net.ipv4.ip_forward=1')
    
    r5.cmd("ifconfig r5-eth0 10.0.6.2/24")
    r5.cmd("ifconfig r5-eth1 10.0.7.1/24")
    r5.cmd("ip route add to 10.0.1.0/24 via 10.0.6.1")
    r5.cmd("ip route add to 10.0.2.0/24 via 10.0.6.1")
    r5.cmd("ip route add to 10.0.3.0/24 via 10.0.6.1")
    r5.cmd("ip route add to 10.0.4.0/24 via 10.0.6.1")
    r5.cmd("ip route add to 10.0.5.0/24 via 10.0.6.1")
    r5.cmd("ip route add to 10.0.6.0/24 via 10.0.6.1")
    r5.cmd("ip route add to 10.0.7.0/24 via 10.0.7.2")
    r5.cmd("ip route add to 10.0.10.0/24 via 10.0.7.2")
    r5.cmd('sysctl net.ipv4.ip_forward=1')
    
    r6.cmd("ifconfig r6-eth0 10.0.8.2/24")
    r6.cmd("ifconfig r6-eth1 10.0.9.1/24")
    r6.cmd("ip route add to 10.0.1.0/24 via 10.0.8.1")
    r6.cmd("ip route add to 10.0.2.0/24 via 10.0.8.1")
    r6.cmd("ip route add to 10.0.3.0/24 via 10.0.8.1")
    r6.cmd("ip route add to 10.0.4.0/24 via 10.0.8.1")
    r6.cmd("ip route add to 10.0.5.0/24 via 10.0.8.1")
    r6.cmd("ip route add to 10.0.8.0/24 via 10.0.8.1")
    r6.cmd("ip route add to 10.0.9.0/24 via 10.0.9.2")
    r6.cmd("ip route add to 10.0.10.0/24 via 10.0.9.2")
    r6.cmd('sysctl net.ipv4.ip_forward=1')
    
    r7.cmd("ifconfig r7-eth0 10.0.7.2/24")
    r7.cmd("ifconfig r7-eth1 10.0.10.1/24")
    r7.cmd("ifconfig r7-eth2 10.0.9.2/24")
    r7.cmd('sysctl net.ipv4.ip_forward=1')
    r7.cmd("ip route add to 10.0.1.0/24 via 10.0.7.1")
    r7.cmd("ip route add to 10.0.2.0/24 via 10.0.7.1")
    r7.cmd("ip route add to 10.0.5.0/24 via 10.0.7.1")
    r7.cmd("ip route add to 10.0.6.0/24 via 10.0.7.1")
    r7.cmd("ip route add to 10.0.7.0/24 via 10.0.7.1")
    
    r7.cmd("ip route add to 10.0.3.0/24 via 10.0.9.1")
    r7.cmd("ip route add to 10.0.4.0/24 via 10.0.9.1")
    r7.cmd("ip route add to 10.0.8.0/24 via 10.0.9.1")
    r7.cmd("ip route add to 10.0.9.0/24 via 10.0.9.1")
    r7.cmd("ip route add to 10.0.10.0/24 via 10.0.10.2")
    r7.cmd('sysctl net.ipv4.ip_forward=1')
    
    h2.cmd("ifconfig h2-eth0 10.0.10.2/24")
    h2.cmd("route add default gw 10.0.10.1")
    
    net.start()
    
    time.sleep(1)
    CLI(net)
    net.stop()
    

    dumbbell

    #!/usr/bin/python
    # CMU 18731 HW2
    # Code referenced from: git@bitbucket.org:huangty/cs144_bufferbloat.git
    # Edited by: Deepti Sunder Prakash
    # https://github.com/dhruvityagi/netsec/blob/master/dumbbell.py
    
    from mininet.topo import Topo
    from mininet.node import CPULimitedHost
    from mininet.link import TCLink
    from mininet.net import Mininet
    from mininet.log import lg, info
    from mininet.util import dumpNodeConnections
    from mininet.cli import CLI
    
    from subprocess import Popen, PIPE
    from time import sleep, time
    from multiprocessing import Process
    from argparse import ArgumentParser
    import time
    import sys
    import os
    
    
    class DumbbellTopo(Topo):
        "Dumbbell topology for Shrew experiment"
        def build(self, n=6, bw_net=100, delay='20ms', bw_host=10):
            #TODO:Add your code to create the topology.
            #Add 2 switches
            s1 = self.addSwitch('s1')
            s2 = self.addSwitch('s2')
            self.addLink(s1, s2,bw=bw_net, delay=delay)
    
            #Left Side
            a1 = self.addHost('a1')
            hl1 = self.addHost('hl1')
            hl2 = self.addHost('hl2')
            # 10 Mbps, 20ms delay
            self.addLink(hl1, s1, bw=bw_host, delay=delay)
            self.addLink(hl2, s1, bw=bw_host, delay=delay)
            self.addLink(a1, s1, bw=bw_host, delay=delay)
            
            #Right Side
            a2 = self.addHost('a2')
            hr1 = self.addHost('hr1')
            hr2 = self.addHost('hr2')
            # 10 Mbps, 20ms delay
            self.addLink(hr1, s2, bw=bw_host, delay=delay)
            self.addLink(hr2, s2, bw=bw_host, delay=delay)
            self.addLink(a2, s2, bw=bw_host, delay=delay)
    def myiperf(net, **kwargs):
        """
        Command to start a transfer between src and dst.
        :param kwargs: named arguments
            src: name of the source node.
            dst: name of the destination node.
            protocol: tcp or udp (default tcp).
        duration: duration of the transfer in seconds (default 10s).
            bw: for udp, bandwidth to send at in bits/sec (default 1 Mbit/sec)
        """
        kwargs.setdefault('protocol', 'TCP')
        kwargs.setdefault('duration', 5)
        kwargs.setdefault('bw', 100000)
        info('***iperf event at t={time}: {args}\n'.format(time=time.time(), args=kwargs))
        
        if not os.path.exists("output"):
            os.makedirs("output")
        server_output = "output/iperf-{protocol}-server-{src}-{dst}.txt".format(**kwargs)
        client_output = "output/iperf-{protocol}-client-{src}-{dst}.txt".format(**kwargs)
        
        client, server = net.get(kwargs['src'], kwargs['dst'])
        iperf_server_cmd = ''
        iperf_client_cmd = ''
        if kwargs['protocol'].upper() == 'UDP':
             iperf_server_cmd = 'iperf -u -s -i 1'
             iperf_client_cmd = 'iperf -u -t {duration} -c {server_ip} -b {bw}'.format(server_ip=server.IP(), **kwargs)
    
    
        elif kwargs['protocol'].upper() == 'TCP':
             iperf_server_cmd = 'iperf -s -i 1'
             iperf_client_cmd = 'iperf -t {duration} -c {server_ip}'.format(server_ip=server.IP(), **kwargs)
        else :
            raise Exception( 'Unexpected protocol:{protocol}'.format(**kwargs))
    
        server.sendCmd('{cmd} &>{output} &'.format(cmd=iperf_server_cmd, output=server_output))
        info('iperf server command: {cmd} -s -i 1 &>{output} &\n'.format(cmd=iperf_server_cmd,
                                                                                output=server_output))
        # This is a patch to allow sending commands while iperf runs in the background.
        # Downside: we cannot know when iperf finishes or collect its output.
        server.waiting = False
    
        if kwargs['protocol'].lower() == 'tcp':
            while 'Connected' not in client.cmd(
                            'sh -c "echo A | telnet -e A %s 5001"' % server.IP()):
                info('Waiting for iperf to start up...\n')
                sleep(.5)
    
        info('iperf client command: {cmd} &>{output} &\n'.format(
        cmd = iperf_client_cmd, output=client_output))
        client.sendCmd('{cmd} &>{output} &'.format(
        cmd = iperf_client_cmd, output=client_output))
        # This is a patch to allow sending commands while iperf runs in the background.
        # Downside: we cannot know when iperf finishes or collect its output.
        client.waiting = False
    def bbnet():
        "Create network and run shrew  experiment"
        print "starting mininet ...."
        topo = DumbbellTopo()
        net = Mininet(topo=topo, host=CPULimitedHost, link=TCLink,
                  autoPinCpus=True)
        net.start()
        #dumpNodeConnections(net.hosts)
    
        #TODO:Add your code to test reachability of hosts.
        #print "Testing network connectivity"
        #net.pingAll()
        
        #TODO:Add your code to start long lived TCP flows between
        #hosts on the left and right.
        #print "Starting long lived tcp connection between hl1 and hr1, hl2 and hr2"
        #hl1, hl2, hr1, hr2, a1 = net.get('hl1','hl2','hr1','hr2', 'a1')
        #hl1_IP = hl1.IP()
        #hr1_IP = hl2.IP()
        #a1_IP = a1.IP()
        #print a1_IP,hl1_IP, hr1_IP
    ## the way to use kwargs
    ## https://blog.csdn.net/u010852680/article/details/77848570
        kwargs={'src':'hl1','dst':'hr1'}
        myiperf(net,**kwargs)
        time.sleep(10)
        #CLI(net)
        
        net.stop()
    
    if __name__ == '__main__':
        bbnet()
    
    

    topology zoo

    https://github.com/cotyb/LISA
    https://github.com/fnss/fnss/tree/master/fnss/topologies

  • Mininet Tutorial (1): A Basic Introduction to Mininet

    2020-03-31 19:56:35
    1. What is Mininet. Mininet is a process-virtualization network emulation tool developed at Stanford University based on the Linux Container architecture. It can create a virtual network containing hosts, switches, controllers, and links; its switches support OpenFlow, enabling highly flexible, customizable software-defined...
  • Mininet Docker image: This repository contains the files for building a Mininet Docker image that can run Mininet-emulated networks. Privileged mode: it is important to run this container in privileged mode (--privileged) so that network interface attributes and devices can be manipulated. I suspect this could also be done with --...
  • mininet resources

    2014-04-18 11:01:26
    In the first two articles of this series we introduced the two key tools of software-defined networking, OpenFlow and Open vSwitch. Many researchers may be keen to try them out but cannot spare much time. This article introduces a powerful, lightweight network research platform, mininet, with which I believe everyone can easily...
  • Mininet flow generator: This is a custom Python script that runs Mininet. The script can generate random flows between hosts for a controller to detect and classify. Getting started: these instructions will help you install and set up the Mininet Flow Generator on your local machine for development and testing...
  • Installing Mininet

    2020-12-28 13:31:01
    Having just created a fresh Ubuntu 14.04 system, I needed to reinstall Mininet to set up the experimental environment. Drawing on the tutorials found online, the following method seems the simplest. First, pull the whole mininet repository down with git: sudo git clone http://github.com/mininet/mininet. If it reports...
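
    For reference, a typical install sequence after cloning looks like this (the -a option installs Mininet together with Open vSwitch and the other bundled tools; adjust the path if you cloned somewhere else):

    sudo git clone http://github.com/mininet/mininet
    cd mininet
    sudo util/install.sh -a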
