I decided to write virtio frontend drivers for FreeBSD because I wanted to understand how virtio works.
My virtio-net driver started working in the middle of October, so I'm planning to offer it to the FreeBSD community.
There are still some tasks to finish before releasing it, so I need some time (and some help :).
Current Status (virtio-net)
- virtio-net driver is working on a 1-vCPU virtual machine; tested with more than 300 GB of TX and RX
- almost the same performance as e1000 (see the graph at the bottom of this post)
- no offload features yet (I don't think they are really needed at this point; see the sketch after this list)
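For anyone curious about the offload point: in virtio, offloads are enabled through feature bits that the guest driver acknowledges back to the host, and my driver simply does not acknowledge the checksum/TSO bits yet. Below is only a rough sketch of that negotiation logic, not the actual driver code; the feature bit values come from the virtio-net spec, but read_host_features(), write_guest_features() and the VTNET_SUPPORTED_FEATURES macro are made-up placeholders for the real PCI config accesses.

/* Sketch of legacy virtio feature negotiation (placeholder helpers). */
#include <stdint.h>
#include <stdio.h>

/* Feature bits defined by the virtio-net spec. */
#define VIRTIO_NET_F_CSUM       (1u << 0)   /* host handles partial checksums */
#define VIRTIO_NET_F_MAC        (1u << 5)   /* host supplies the MAC address */
#define VIRTIO_NET_F_MRG_RXBUF  (1u << 15)  /* mergeable RX buffers */

/* Features this driver is prepared to handle today: the MAC only,
 * no checksum/TSO offload. */
#define VTNET_SUPPORTED_FEATURES  (VIRTIO_NET_F_MAC)

/* Placeholder accessors for the legacy virtio PCI feature registers. */
static uint32_t read_host_features(void) {
    return VIRTIO_NET_F_CSUM | VIRTIO_NET_F_MAC | VIRTIO_NET_F_MRG_RXBUF;
}
static void write_guest_features(uint32_t f) {
    printf("ack features 0x%08x\n", (unsigned)f);
}

int main(void)
{
    uint32_t host = read_host_features();
    /* Acknowledge only the intersection of what the host offers and
     * what the driver actually implements. */
    write_guest_features(host & VTNET_SUPPORTED_FEATURES);
    return (0);
}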
Development Environment
- Host: Fedora 14 alpha x86_64
- Guest: FreeBSD 8.1-RELEASE, amd64
Minimum Tasks Required Before Release
- non-indirect table mode support (needs testing and debugging; see the sketch after this list)
- splitting the virtio PCI device framework from the virtio-net driver
- SMP support
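About the first item in the list above: a virtio request can be handed to the host either as a chain of descriptors linked inside the ring itself (the plain, non-indirect mode) or as a single descriptor pointing at a separate indirect table. My current code relies on the indirect form, so the plain chained form still needs testing. The fragment below is only a sketch of what the non-indirect case looks like, with the vring_desc layout and flag values taken from the virtio spec; the buffer addresses are placeholders for what busdma would return.

/* Sketch: a two-descriptor TX chain placed directly in the ring
 * (the non-indirect mode that still needs testing). */
#include <stdint.h>

/* Descriptor layout and flags as defined by the virtio spec. */
struct vring_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;    /* length of the buffer */
    uint16_t flags;  /* NEXT / WRITE / INDIRECT */
    uint16_t next;   /* index of the next descriptor in the chain */
};
#define VRING_DESC_F_NEXT      1
#define VRING_DESC_F_WRITE     2
#define VRING_DESC_F_INDIRECT  4

/* Chain a virtio-net header descriptor and a data descriptor.
 * In a real driver the second index would come from the free list
 * rather than always being head + 1. */
void
fill_tx_chain(struct vring_desc *ring, uint16_t head,
              uint64_t hdr_pa, uint32_t hdr_len,
              uint64_t data_pa, uint32_t data_len)
{
    ring[head].addr  = hdr_pa;
    ring[head].len   = hdr_len;
    ring[head].flags = VRING_DESC_F_NEXT;   /* another descriptor follows */
    ring[head].next  = head + 1;

    ring[head + 1].addr  = data_pa;
    ring[head + 1].len   = data_len;
    ring[head + 1].flags = 0;               /* end of chain; host only reads */
    ring[head + 1].next  = 0;
}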
If you're interested in developing virtio drivers for FreeBSD, please contact me. I'm not an experienced kernel hacker (this is actually my first kernel-mode program and my first project in C), so any help is welcome.
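Below is a session log from the guest. As background for the kldload output: a virtio device appears as an ordinary PCI device with vendor ID 0x1af4, and the PCI subsystem device ID tells the transport which virtio device type sits behind it (1 means a network device); that is what would let a generic virtio_pci layer attach the right child driver once the framework split is done. The probe below is only an illustrative sketch in the shape of a FreeBSD newbus probe routine, not the code that produced the log, and it is not a complete, buildable module.

/* Illustrative sketch of a virtio PCI probe (not the actual driver). */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <dev/pci/pcivar.h>

#define VIRTIO_PCI_VENDOR  0x1af4  /* Qumranet/Red Hat virtio vendor ID */
#define VIRTIO_ID_NETWORK  1       /* subsystem device ID for virtio-net */

static int
virtio_pci_probe(device_t dev)
{
    /* Legacy virtio devices use device IDs 0x1000-0x103f. */
    if (pci_get_vendor(dev) != VIRTIO_PCI_VENDOR)
        return (ENXIO);
    if (pci_get_device(dev) < 0x1000 || pci_get_device(dev) > 0x103f)
        return (ENXIO);
    /* The subsystem device ID encodes the virtio device type. */
    if (pci_get_subdevice(dev) != VIRTIO_ID_NETWORK)
        return (ENXIO);

    device_set_desc(dev, "virtio-net Virtual Network Interface");
    return (BUS_PROBE_DEFAULT);
}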
vm162# ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
ether 52:54:00:1f:61:0d
inet 192.168.44.162 netmask 0xfffffe00 broadcast 192.168.45.255
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=3<RXCSUM,TXCSUM>
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
inet6 ::1 prefixlen 128
inet 127.0.0.1 netmask 0xff000000
nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
vm162# make load
/sbin/kldload -v /usr/src/sys/dev/xen/virtio/virtio_pci.ko
virtio_pci0: <virtio-net Virtual Network Interface> port 0xcb00-0xcb1f mem 0xf2054000-0xf2054fff irq 11 at device 8.0 on pci0
virtio_pci0: assigning PCI resouces: <1>mem <0>ioport <0>irq callback
virtio_pci0: [GIANT-LOCKED]
virtio_pci0: [ITHREAD]
vn0: Ethernet address: 52:54:00:1f:61:0e
Loaded /usr/src/sys/dev/xen/virtio/virtio_pci.ko, id=2
vm162# ifconfig vn0 up
vm162# ifconfig vn0 192.168.90.1 255.255.255.0
vm162# ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
ether 52:54:00:1f:61:0d
inet 192.168.44.162 netmask 0xfffffe00 broadcast 192.168.45.255
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=3<RXCSUM,TXCSUM>
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
inet6 ::1 prefixlen 128
inet 127.0.0.1 netmask 0xff000000
nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
vn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
ether 52:54:00:1f:61:0e
inet 192.168.90.1 netmask 0xffffff00 broadcast 255.255.255.0
vm162# ping 192.168.90.2
PING 192.168.90.2 (192.168.90.2): 56 data bytes
64 bytes from 192.168.90.2: icmp_seq=0 ttl=64 time=3.696 ms
64 bytes from 192.168.90.2: icmp_seq=1 ttl=64 time=1.004 ms
64 bytes from 192.168.90.2: icmp_seq=2 ttl=64 time=1.051 ms
64 bytes from 192.168.90.2: icmp_seq=3 ttl=64 time=1.044 ms
64 bytes from 192.168.90.2: icmp_seq=4 ttl=64 time=1.218 ms
64 bytes from 192.168.90.2: icmp_seq=5 ttl=64 time=0.970 ms
^C
--- 192.168.90.2 ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.970/1.497/3.696/0.986 ms
vm162# iperf -c 192.168.90.2
------------------------------------------------------------
Client connecting to 192.168.90.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.90.1 port 15185 connected with 192.168.90.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.7 sec 142 MBytes 112 Mbits/sec
vm162#
[Graph: performance comparison between em (e1000) and vn (virtio)]