Merge pull request #499 from ijc25/vsock-v6-patches

Vsock v6 patches
Ian Campbell 2016-09-14 12:13:20 +01:00 committed by GitHub
commit c0dc9ca877
46 changed files with 1070 additions and 554 deletions

View File

@ -1,7 +1,7 @@
From 4c251c111a65c8eef8e4dcf0b7326ef7761f6ab9 Mon Sep 17 00:00:00 2001
From 0d67af6648f600656eb20cb2ca1d35cb0985e9bd Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 17 Dec 2015 16:53:43 +0800
Subject: [PATCH 01/40] virtio: make find_vqs() checkpatch.pl-friendly
Subject: [PATCH 01/46] virtio: make find_vqs() checkpatch.pl-friendly
checkpatch.pl wants arrays of strings declared as follows:
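As a rough illustration (the identifier below is made up, not taken from the patch), the complaint is about string arrays whose pointer slots are still writable; checkpatch.pl prefers both the array and the strings it points to be const:

  /* Flagged by checkpatch.pl -- the array of pointers is itself writable:
   *
   *     const char *names[] = { "rx", "tx", "event" };
   *
   * Preferred form -- the array and the strings it points to are both const:
   */
  static const char * const names[] = { "rx", "tx", "event" };

The find_vqs() prototype changes in this patch adopt the preferred form.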
@ -115,10 +115,10 @@ index 1b83159..bf2d130 100644
struct virtio_ccw_device *vcdev = to_vc_device(vdev);
unsigned long *indicatorp = NULL;
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 7d3e5d0..0c3691f 100644
index 56f7e25..66082c9 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -388,7 +388,7 @@ static int init_vqs(struct virtio_balloon *vb)
@@ -394,7 +394,7 @@ static int init_vqs(struct virtio_balloon *vb)
{
struct virtqueue *vqs[3];
vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
@ -215,5 +215,5 @@ index e5ce8ab..6e6cb0c 100644
u64 (*get_features)(struct virtio_device *vdev);
int (*finalize_features)(struct virtio_device *vdev);
--
2.9.0
2.9.3

View File

@ -1,7 +1,7 @@
From 514c41f6df542ae7b259f5d6aefea4af508fac94 Mon Sep 17 00:00:00 2001
From f1e0be6f17679e7f532989bae3290bb4ed3ba773 Mon Sep 17 00:00:00 2001
From: Julia Lawall <julia.lawall@lip6.fr>
Date: Sat, 21 Nov 2015 18:39:17 +0100
Subject: [PATCH 02/40] VSOCK: constify vmci_transport_notify_ops structures
Subject: [PATCH 02/46] VSOCK: constify vmci_transport_notify_ops structures
The vmci_transport_notify_ops structures are never modified, so declare
them as const.
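As a rough sketch of the pattern (struct and function names below are hypothetical, only the idea matches the patch): an ops table that is never written after initialisation can be declared const so the compiler places it in read-only data.

  struct example_notify_ops {
          void (*socket_init)(struct sock *sk);
          int  (*poll_in)(struct sock *sk, size_t target, bool *ready);
  };

  static void example_socket_init(struct sock *sk)
  {
          /* ... */
  }

  static int example_poll_in(struct sock *sk, size_t target, bool *ready)
  {
          *ready = true;
          return 0;
  }

  /* never modified at run time, so it can be const (.rodata) */
  static const struct example_notify_ops example_ops = {
          .socket_init = example_socket_init,
          .poll_in     = example_poll_in,
  };

Code using such a table then holds a const struct example_notify_ops *, as the vmci_transport code can once the tables themselves are const.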
@ -73,5 +73,5 @@ index dc9c792..21e591d 100644
vmci_transport_notify_pkt_socket_destruct,
vmci_transport_notify_pkt_poll_in,
--
2.9.0
2.9.3

View File

@ -1,7 +1,7 @@
From e91f4552f7a858fa44418e1996e21b3098683de4 Mon Sep 17 00:00:00 2001
From bc63a861a8379269f4a51fdaac3d40f9161aea4d Mon Sep 17 00:00:00 2001
From: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Date: Tue, 22 Mar 2016 17:05:52 +0100
Subject: [PATCH 03/40] AF_VSOCK: Shrink the area influenced by prepare_to_wait
Subject: [PATCH 03/46] AF_VSOCK: Shrink the area influenced by prepare_to_wait
When a thread is prepared for waiting by calling prepare_to_wait, sleeping
is not allowed until either the wait has taken place or finish_wait has
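For reference, a minimal sketch of the rule being respected here (the condition helper and timeout value are made up; this is not code from the patch): anything that may sleep has to happen either before prepare_to_wait() or after finish_wait(), because prepare_to_wait() changes the task state and a nested sleep can clobber it and lose the wakeup.

  DEFINE_WAIT(wait);
  long timeout = 30 * HZ;                 /* arbitrary for the sketch */

  for (;;) {
          prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
          if (data_is_ready(sk))          /* hypothetical condition */
                  break;
          release_sock(sk);               /* drop locks first... */
          timeout = schedule_timeout(timeout);  /* ...then really sleep */
          lock_sock(sk);
          if (!timeout || signal_pending(current))
                  break;
  }
  finish_wait(sk_sleep(sk), &wait);

The patch moves the potentially sleeping work in the AF_VSOCK send and receive paths out of that window, so prepare_to_wait() is only ever followed by the condition check and the schedule.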
@ -332,5 +332,5 @@ index 9b5bd6d..b5f1221 100644
release_sock(sk);
return err;
--
2.9.0
2.9.3

View File

@ -1,7 +1,7 @@
From 4192f672fae559f32d82de72a677701853cc98a7 Mon Sep 17 00:00:00 2001
From c1bc13ebe28532f99cb6b8edaa57a6aa61adbe58 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 23 Jun 2016 16:28:58 +0100
Subject: [PATCH] vsock: make listener child lock ordering explicit
Subject: [PATCH 04/46] vsock: make listener child lock ordering explicit
There are several places where the listener and pending or accept queue
child sockets are accessed at the same time. Lockdep is unhappy that
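Lockdep treats the listener's and the child's sk_lock as the same lock class, so the remedy, sketched here with illustrative variable names, is to annotate the places where the child is locked while the listener is already held as a deliberate single level of nesting:

  lock_sock(listener);                              /* parent taken first */
  lock_sock_nested(child, SINGLE_DEPTH_NESTING);    /* child is subclass 1 */
  /* ... move the child between the pending and accept queues ... */
  release_sock(child);
  release_sock(listener);

With a plain lock_sock(child), lockdep would report a possible recursive acquisition of the same class even though the order is always listener, then child.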
@ -16,12 +16,13 @@ covered the vsock_pending_work() function.
Suggested-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 4192f672fae559f32d82de72a677701853cc98a7)
---
net/vmw_vsock/af_vsock.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index b5f1221..b96ac918 100644
index b5f1221..b96ac91 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -61,6 +61,14 @@
@ -57,3 +58,6 @@ index b5f1221..b96ac918 100644
vconnected = vsock_sk(connected);
/* If the listener socket has received an error, then we should
--
2.9.3

View File

@ -1,7 +1,7 @@
From c2ec5f1e2aa8784acadbaf9f625d8ca516c81c6b Mon Sep 17 00:00:00 2001
From 3351d8e1a0e52529aea4b5c3ce0c2ef3ad43ee17 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 17 Dec 2015 11:10:21 +0800
Subject: [PATCH 04/40] VSOCK: transport-specific vsock_transport functions
Date: Thu, 28 Jul 2016 15:36:30 +0100
Subject: [PATCH 05/46] VSOCK: transport-specific vsock_transport functions
struct vsock_transport contains function pointers called by AF_VSOCK
core code. The transport may want its own transport-specific function
@ -13,7 +13,8 @@ access transport-specific function pointers.
The virtio transport will use this.
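Concretely, the virtio transport introduced later in this series embeds the generic struct vsock_transport as its first member and gets back to the wrapper with container_of() after calling the vsock_core_get_transport() helper added here -- roughly:

  struct virtio_transport {
          /* This must be the first field */
          struct vsock_transport transport;

          /* transport-specific hooks follow */
  };

  static const struct virtio_transport *virtio_transport_get_ops(void)
  {
          const struct vsock_transport *t = vsock_core_get_transport();

          return container_of(t, struct virtio_transport, transport);
  }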
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 7740f7aafc9e6f415e8b6d5e8421deae63033b8d)
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 0b01aeb3d2fbf16787f0c9629f4ca52ae792f732)
---
include/net/af_vsock.h | 3 +++
net/vmw_vsock/af_vsock.c | 9 +++++++++
@ -34,10 +35,10 @@ index e9eb2d6..23f5525 100644
void vsock_release_pending(struct sock *pending);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index b5f1221..15f9595 100644
index b96ac91..e34d96f 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1987,6 +1987,15 @@ void vsock_core_exit(void)
@@ -1995,6 +1995,15 @@ void vsock_core_exit(void)
}
EXPORT_SYMBOL_GPL(vsock_core_exit);
@ -54,5 +55,5 @@ index b5f1221..15f9595 100644
MODULE_DESCRIPTION("VMware Virtual Socket Family");
MODULE_VERSION("1.0.1.0-k");
--
2.9.0
2.9.3

View File

@ -0,0 +1,83 @@
From 54f528a28ab97e1306180adde7ba15c4158428db Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 28 Jul 2016 15:36:31 +0100
Subject: [PATCH 06/46] VSOCK: defer sock removal to transports
The virtio transport will implement graceful shutdown and the related
SO_LINGER socket option. This requires orphaning the sock but keeping
it in the table of connections after .release().
This patch adds the vsock_remove_sock() function and leaves it up to the
transport when to remove the sock.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6773b7dc39f165bd9d824b50ac52cbb3f87d53c8)
---
include/net/af_vsock.h | 1 +
net/vmw_vsock/af_vsock.c | 16 ++++++++++------
net/vmw_vsock/vmci_transport.c | 2 ++
3 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index 23f5525..3af0b22 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -180,6 +180,7 @@ void vsock_remove_connected(struct vsock_sock *vsk);
struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr);
struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
struct sockaddr_vm *dst);
+void vsock_remove_sock(struct vsock_sock *vsk);
void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
#endif /* __AF_VSOCK_H__ */
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index e34d96f..17dbbe6 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -344,6 +344,16 @@ static bool vsock_in_connected_table(struct vsock_sock *vsk)
return ret;
}
+void vsock_remove_sock(struct vsock_sock *vsk)
+{
+ if (vsock_in_bound_table(vsk))
+ vsock_remove_bound(vsk);
+
+ if (vsock_in_connected_table(vsk))
+ vsock_remove_connected(vsk);
+}
+EXPORT_SYMBOL_GPL(vsock_remove_sock);
+
void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
{
int i;
@@ -660,12 +670,6 @@ static void __vsock_release(struct sock *sk)
vsk = vsock_sk(sk);
pending = NULL; /* Compiler warning. */
- if (vsock_in_bound_table(vsk))
- vsock_remove_bound(vsk);
-
- if (vsock_in_connected_table(vsk))
- vsock_remove_connected(vsk);
-
transport->release(vsk);
lock_sock(sk);
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index 0a369bb..706991e 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -1644,6 +1644,8 @@ static void vmci_transport_destruct(struct vsock_sock *vsk)
static void vmci_transport_release(struct vsock_sock *vsk)
{
+ vsock_remove_sock(vsk);
+
if (!vmci_handle_is_invalid(vmci_trans(vsk)->dg_handle)) {
vmci_datagram_destroy_handle(vmci_trans(vsk)->dg_handle);
vmci_trans(vsk)->dg_handle = VMCI_INVALID_HANDLE;
--
2.9.3

View File

@ -1,31 +1,36 @@
From c3d222b1921fc5c9a6d10b2d2f2b0141fcc0741e Mon Sep 17 00:00:00 2001
From ec54f262b84f327f1c0ecabce6f0c9b5c75ff2df Mon Sep 17 00:00:00 2001
From: Asias He <asias@redhat.com>
Date: Thu, 13 Jun 2013 18:27:00 +0800
Subject: [PATCH 05/40] VSOCK: Introduce virtio_vsock_common.ko
Date: Thu, 28 Jul 2016 15:36:32 +0100
Subject: [PATCH 07/46] VSOCK: Introduce virtio_vsock_common.ko
This module contains the common code and header files for the following
virtio_transport and vhost_vsock kernel modules.
Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 06a8fc78367d070720af960dcecec917d3ae5f3b)
---
MAINTAINERS | 10 +
include/linux/virtio_vsock.h | 167 ++++
.../trace/events/vsock_virtio_transport_common.h | 144 ++++
include/linux/virtio_vsock.h | 154 ++++
include/net/af_vsock.h | 2 +
.../trace/events/vsock_virtio_transport_common.h | 144 +++
include/uapi/linux/Kbuild | 1 +
include/uapi/linux/virtio_ids.h | 1 +
include/uapi/linux/virtio_vsock.h | 94 +++
net/vmw_vsock/virtio_transport_common.c | 838 +++++++++++++++++++++
6 files changed, 1254 insertions(+)
include/uapi/linux/virtio_vsock.h | 94 ++
net/vmw_vsock/virtio_transport_common.c | 992 +++++++++++++++++++++
8 files changed, 1398 insertions(+)
create mode 100644 include/linux/virtio_vsock.h
create mode 100644 include/trace/events/vsock_virtio_transport_common.h
create mode 100644 include/uapi/linux/virtio_vsock.h
create mode 100644 net/vmw_vsock/virtio_transport_common.c
diff --git a/MAINTAINERS b/MAINTAINERS
index ab65bbe..b93ba8b 100644
index 48bd523..3e60f59 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11382,6 +11382,16 @@ S: Maintained
@@ -11395,6 +11395,16 @@ S: Maintained
F: drivers/media/v4l2-core/videobuf2-*
F: include/media/videobuf2-*
@ -44,10 +49,10 @@ index ab65bbe..b93ba8b 100644
S: Maintained
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
new file mode 100644
index 0000000..4c3d8e6
index 0000000..9638bfe
--- /dev/null
+++ b/include/linux/virtio_vsock.h
@@ -0,0 +1,167 @@
@@ -0,0 +1,154 @@
+#ifndef _LINUX_VIRTIO_VSOCK_H
+#define _LINUX_VIRTIO_VSOCK_H
+
@ -62,8 +67,6 @@ index 0000000..4c3d8e6
+#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 4)
+#define VIRTIO_VSOCK_MAX_BUF_SIZE 0xFFFFFFFFUL
+#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 64)
+#define VIRTIO_VSOCK_MAX_TX_BUF_SIZE (1024 * 1024 * 16)
+#define VIRTIO_VSOCK_MAX_DGRAM_SIZE (1024 * 64)
+
+enum {
+ VSOCK_VQ_RX = 0, /* for host to guest data */
@ -81,8 +84,8 @@ index 0000000..4c3d8e6
+ u32 buf_size_min;
+ u32 buf_size_max;
+
+ struct mutex tx_lock;
+ struct mutex rx_lock;
+ spinlock_t tx_lock;
+ spinlock_t rx_lock;
+
+ /* Protected by tx_lock */
+ u32 tx_cnt;
@ -103,6 +106,7 @@ index 0000000..4c3d8e6
+ void *buf;
+ u32 len;
+ u32 off;
+ bool reply;
+};
+
+struct virtio_vsock_pkt_info {
@ -112,29 +116,17 @@ index 0000000..4c3d8e6
+ u16 type;
+ u16 op;
+ u32 flags;
+ bool reply;
+};
+
+struct virtio_transport {
+ /* This must be the first field */
+ struct vsock_transport transport;
+
+ /* Send packet for a specific socket */
+ int (*send_pkt)(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt_info *info);
+
+ /* Send packet without a socket (e.g. RST). Prefer send_pkt() over
+ * send_pkt_no_sock() when a socket exists.
+ */
+ int (*send_pkt_no_sock)(struct virtio_vsock_pkt *pkt);
+ /* Takes ownership of the packet */
+ int (*send_pkt)(struct virtio_vsock_pkt *pkt);
+};
+
+struct virtio_vsock_pkt *
+virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
+ size_t len,
+ u32 src_cid,
+ u32 src_port,
+ u32 dst_cid,
+ u32 dst_port);
+ssize_t
+virtio_transport_stream_dequeue(struct vsock_sock *vsk,
+ struct msghdr *msg,
@ -215,6 +207,19 @@ index 0000000..4c3d8e6
+void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
+
+#endif /* _LINUX_VIRTIO_VSOCK_H */
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index 3af0b22..f275896 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -63,6 +63,8 @@ struct vsock_sock {
struct list_head accept_queue;
bool rejected;
struct delayed_work dwork;
+ struct delayed_work close_work;
+ bool close_work_scheduled;
u32 peer_shutdown;
bool sent_request;
bool ignore_connecting_rst;
diff --git a/include/trace/events/vsock_virtio_transport_common.h b/include/trace/events/vsock_virtio_transport_common.h
new file mode 100644
index 0000000..b7f1d62
@ -365,6 +370,18 @@ index 0000000..b7f1d62
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 32152e7..c830e9f 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -448,6 +448,7 @@ header-y += virtio_ring.h
header-y += virtio_rng.h
header-y += virtio_scsi.h
header-y += virtio_types.h
+header-y += virtio_vsock.h
header-y += vm_sockets.h
header-y += vt.h
header-y += wait.h
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 77925f5..3228d58 100644
--- a/include/uapi/linux/virtio_ids.h
@ -378,7 +395,7 @@ index 77925f5..3228d58 100644
#endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_vsock.h b/include/uapi/linux/virtio_vsock.h
new file mode 100644
index 0000000..12946ab
index 0000000..6b011c1
--- /dev/null
+++ b/include/uapi/linux/virtio_vsock.h
@@ -0,0 +1,94 @@
@ -423,8 +440,8 @@ index 0000000..12946ab
+#include <linux/virtio_config.h>
+
+struct virtio_vsock_config {
+ __le32 guest_cid;
+};
+ __le64 guest_cid;
+} __attribute__((packed));
+
+enum virtio_vsock_event_id {
+ VIRTIO_VSOCK_EVENT_TRANSPORT_RESET = 0,
@ -432,12 +449,12 @@ index 0000000..12946ab
+
+struct virtio_vsock_event {
+ __le32 id;
+};
+} __attribute__((packed));
+
+struct virtio_vsock_hdr {
+ __le32 src_cid;
+ __le64 src_cid;
+ __le64 dst_cid;
+ __le32 src_port;
+ __le32 dst_cid;
+ __le32 dst_port;
+ __le32 len;
+ __le16 type; /* enum virtio_vsock_type */
@ -445,7 +462,7 @@ index 0000000..12946ab
+ __le32 flags;
+ __le32 buf_alloc;
+ __le32 fwd_cnt;
+};
+} __attribute__((packed));
+
+enum virtio_vsock_type {
+ VIRTIO_VSOCK_TYPE_STREAM = 1,
@ -478,10 +495,10 @@ index 0000000..12946ab
+#endif /* _UAPI_LINUX_VIRTIO_VSOCK_H */
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
new file mode 100644
index 0000000..5b9e202
index 0000000..a53b3a1
--- /dev/null
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -0,0 +1,838 @@
@@ -0,0 +1,992 @@
+/*
+ * common code for virtio vsock
+ *
@ -491,6 +508,7 @@ index 0000000..5b9e202
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/ctype.h>
+#include <linux/list.h>
@ -505,6 +523,9 @@ index 0000000..5b9e202
+#define CREATE_TRACE_POINTS
+#include <trace/events/vsock_virtio_transport_common.h>
+
+/* How long to wait for graceful shutdown of a connection */
+#define VSOCK_CLOSE_TIMEOUT (8 * HZ)
+
+static const struct virtio_transport *virtio_transport_get_ops(void)
+{
+ const struct vsock_transport *t = vsock_core_get_transport();
@ -512,17 +533,6 @@ index 0000000..5b9e202
+ return container_of(t, struct virtio_transport, transport);
+}
+
+static int virtio_transport_send_pkt(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt_info *info)
+{
+ return virtio_transport_get_ops()->send_pkt(vsk, info);
+}
+
+static int virtio_transport_send_pkt_no_sock(struct virtio_vsock_pkt *pkt)
+{
+ return virtio_transport_get_ops()->send_pkt_no_sock(pkt);
+}
+
+struct virtio_vsock_pkt *
+virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
+ size_t len,
@ -540,13 +550,14 @@ index 0000000..5b9e202
+
+ pkt->hdr.type = cpu_to_le16(info->type);
+ pkt->hdr.op = cpu_to_le16(info->op);
+ pkt->hdr.src_cid = cpu_to_le32(src_cid);
+ pkt->hdr.src_cid = cpu_to_le64(src_cid);
+ pkt->hdr.dst_cid = cpu_to_le64(dst_cid);
+ pkt->hdr.src_port = cpu_to_le32(src_port);
+ pkt->hdr.dst_cid = cpu_to_le32(dst_cid);
+ pkt->hdr.dst_port = cpu_to_le32(dst_port);
+ pkt->hdr.flags = cpu_to_le32(info->flags);
+ pkt->len = len;
+ pkt->hdr.len = cpu_to_le32(len);
+ pkt->reply = info->reply;
+
+ if (info->msg && len > 0) {
+ pkt->buf = kmalloc(len, GFP_KERNEL);
@ -574,6 +585,50 @@ index 0000000..5b9e202
+}
+EXPORT_SYMBOL_GPL(virtio_transport_alloc_pkt);
+
+static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt_info *info)
+{
+ u32 src_cid, src_port, dst_cid, dst_port;
+ struct virtio_vsock_sock *vvs;
+ struct virtio_vsock_pkt *pkt;
+ u32 pkt_len = info->pkt_len;
+
+ src_cid = vm_sockets_get_local_cid();
+ src_port = vsk->local_addr.svm_port;
+ if (!info->remote_cid) {
+ dst_cid = vsk->remote_addr.svm_cid;
+ dst_port = vsk->remote_addr.svm_port;
+ } else {
+ dst_cid = info->remote_cid;
+ dst_port = info->remote_port;
+ }
+
+ vvs = vsk->trans;
+
+ /* we can send less than pkt_len bytes */
+ if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
+ pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+
+ /* virtio_transport_get_credit might return less than pkt_len credit */
+ pkt_len = virtio_transport_get_credit(vvs, pkt_len);
+
+ /* Do not send zero length OP_RW pkt */
+ if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
+ return pkt_len;
+
+ pkt = virtio_transport_alloc_pkt(info, pkt_len,
+ src_cid, src_port,
+ dst_cid, dst_port);
+ if (!pkt) {
+ virtio_transport_put_credit(vvs, pkt_len);
+ return -ENOMEM;
+ }
+
+ virtio_transport_inc_tx_pkt(vvs, pkt);
+
+ return virtio_transport_get_ops()->send_pkt(pkt);
+}
+
+static void virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
+ struct virtio_vsock_pkt *pkt)
+{
@ -589,10 +644,10 @@ index 0000000..5b9e202
+
+void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs, struct virtio_vsock_pkt *pkt)
+{
+ mutex_lock(&vvs->tx_lock);
+ spin_lock_bh(&vvs->tx_lock);
+ pkt->hdr.fwd_cnt = cpu_to_le32(vvs->fwd_cnt);
+ pkt->hdr.buf_alloc = cpu_to_le32(vvs->buf_alloc);
+ mutex_unlock(&vvs->tx_lock);
+ spin_unlock_bh(&vvs->tx_lock);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_inc_tx_pkt);
+
@ -600,12 +655,12 @@ index 0000000..5b9e202
+{
+ u32 ret;
+
+ mutex_lock(&vvs->tx_lock);
+ spin_lock_bh(&vvs->tx_lock);
+ ret = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
+ if (ret > credit)
+ ret = credit;
+ vvs->tx_cnt += ret;
+ mutex_unlock(&vvs->tx_lock);
+ spin_unlock_bh(&vvs->tx_lock);
+
+ return ret;
+}
@ -613,9 +668,9 @@ index 0000000..5b9e202
+
+void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit)
+{
+ mutex_lock(&vvs->tx_lock);
+ spin_lock_bh(&vvs->tx_lock);
+ vvs->tx_cnt -= credit;
+ mutex_unlock(&vvs->tx_lock);
+ spin_unlock_bh(&vvs->tx_lock);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_put_credit);
+
@ -628,7 +683,7 @@ index 0000000..5b9e202
+ .type = type,
+ };
+
+ return virtio_transport_send_pkt(vsk, &info);
+ return virtio_transport_send_pkt_info(vsk, &info);
+}
+
+static ssize_t
@ -641,10 +696,8 @@ index 0000000..5b9e202
+ size_t bytes, total = 0;
+ int err = -EFAULT;
+
+ mutex_lock(&vvs->rx_lock);
+ while (total < len &&
+ vvs->rx_bytes > 0 &&
+ !list_empty(&vvs->rx_queue)) {
+ spin_lock_bh(&vvs->rx_lock);
+ while (total < len && !list_empty(&vvs->rx_queue)) {
+ pkt = list_first_entry(&vvs->rx_queue,
+ struct virtio_vsock_pkt, list);
+
@ -652,9 +705,17 @@ index 0000000..5b9e202
+ if (bytes > pkt->len - pkt->off)
+ bytes = pkt->len - pkt->off;
+
+ /* sk_lock is held by caller so no one else can dequeue.
+ * Unlock rx_lock since memcpy_to_msg() may sleep.
+ */
+ spin_unlock_bh(&vvs->rx_lock);
+
+ err = memcpy_to_msg(msg, pkt->buf + pkt->off, bytes);
+ if (err)
+ goto out;
+
+ spin_lock_bh(&vvs->rx_lock);
+
+ total += bytes;
+ pkt->off += bytes;
+ if (pkt->off == pkt->len) {
@ -663,7 +724,7 @@ index 0000000..5b9e202
+ virtio_transport_free_pkt(pkt);
+ }
+ }
+ mutex_unlock(&vvs->rx_lock);
+ spin_unlock_bh(&vvs->rx_lock);
+
+ /* Send a credit pkt to peer */
+ virtio_transport_send_credit_update(vsk, VIRTIO_VSOCK_TYPE_STREAM,
@ -672,7 +733,6 @@ index 0000000..5b9e202
+ return total;
+
+out:
+ mutex_unlock(&vvs->rx_lock);
+ if (total)
+ err = total;
+ return err;
@ -704,9 +764,9 @@ index 0000000..5b9e202
+ struct virtio_vsock_sock *vvs = vsk->trans;
+ s64 bytes;
+
+ mutex_lock(&vvs->rx_lock);
+ spin_lock_bh(&vvs->rx_lock);
+ bytes = vvs->rx_bytes;
+ mutex_unlock(&vvs->rx_lock);
+ spin_unlock_bh(&vvs->rx_lock);
+
+ return bytes;
+}
@ -729,9 +789,9 @@ index 0000000..5b9e202
+ struct virtio_vsock_sock *vvs = vsk->trans;
+ s64 bytes;
+
+ mutex_lock(&vvs->tx_lock);
+ spin_lock_bh(&vvs->tx_lock);
+ bytes = virtio_transport_has_space(vsk);
+ mutex_unlock(&vvs->tx_lock);
+ spin_unlock_bh(&vvs->tx_lock);
+
+ return bytes;
+}
@ -763,8 +823,8 @@ index 0000000..5b9e202
+
+ vvs->buf_alloc = vvs->buf_size;
+
+ mutex_init(&vvs->rx_lock);
+ mutex_init(&vvs->tx_lock);
+ spin_lock_init(&vvs->rx_lock);
+ spin_lock_init(&vvs->tx_lock);
+ INIT_LIST_HEAD(&vvs->rx_queue);
+
+ return 0;
@ -962,7 +1022,7 @@ index 0000000..5b9e202
+ .type = VIRTIO_VSOCK_TYPE_STREAM,
+ };
+
+ return virtio_transport_send_pkt(vsk, &info);
+ return virtio_transport_send_pkt_info(vsk, &info);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_connect);
+
@ -977,22 +1037,10 @@ index 0000000..5b9e202
+ VIRTIO_VSOCK_SHUTDOWN_SEND : 0),
+ };
+
+ return virtio_transport_send_pkt(vsk, &info);
+ return virtio_transport_send_pkt_info(vsk, &info);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_shutdown);
+
+void virtio_transport_release(struct vsock_sock *vsk)
+{
+ struct sock *sk = &vsk->sk;
+
+ /* Tell other side to terminate connection */
+ if (sk->sk_type == SOCK_STREAM &&
+ vsk->peer_shutdown != SHUTDOWN_MASK &&
+ sk->sk_state == SS_CONNECTED)
+ (void)virtio_transport_shutdown(vsk, SHUTDOWN_MASK);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_release);
+
+int
+virtio_transport_dgram_enqueue(struct vsock_sock *vsk,
+ struct sockaddr_vm *remote_addr,
@ -1015,7 +1063,7 @@ index 0000000..5b9e202
+ .pkt_len = len,
+ };
+
+ return virtio_transport_send_pkt(vsk, &info);
+ return virtio_transport_send_pkt_info(vsk, &info);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_stream_enqueue);
+
@ -1027,29 +1075,31 @@ index 0000000..5b9e202
+}
+EXPORT_SYMBOL_GPL(virtio_transport_destruct);
+
+static int virtio_transport_send_reset(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt *pkt)
+static int virtio_transport_reset(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt *pkt)
+{
+ struct virtio_vsock_pkt_info info = {
+ .op = VIRTIO_VSOCK_OP_RST,
+ .type = VIRTIO_VSOCK_TYPE_STREAM,
+ .reply = !!pkt,
+ };
+
+ /* Send RST only if the original pkt is not a RST pkt */
+ if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
+ if (pkt && le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
+ return 0;
+
+ return virtio_transport_send_pkt(vsk, &info);
+ return virtio_transport_send_pkt_info(vsk, &info);
+}
+
+/* Normally packets are associated with a socket. There may be no socket if an
+ * attempt was made to connect to a socket that does not exist.
+ */
+static int virtio_transport_send_reset_no_sock(struct virtio_vsock_pkt *pkt)
+static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+{
+ struct virtio_vsock_pkt_info info = {
+ .op = VIRTIO_VSOCK_OP_RST,
+ .type = le16_to_cpu(pkt->hdr.type),
+ .reply = true,
+ };
+
+ /* Send RST only if the original pkt is not a RST pkt */
@ -1064,9 +1114,117 @@ index 0000000..5b9e202
+ if (!pkt)
+ return -ENOMEM;
+
+ return virtio_transport_send_pkt_no_sock(pkt);
+ return virtio_transport_get_ops()->send_pkt(pkt);
+}
+
+static void virtio_transport_wait_close(struct sock *sk, long timeout)
+{
+ if (timeout) {
+ DEFINE_WAIT(wait);
+
+ do {
+ prepare_to_wait(sk_sleep(sk), &wait,
+ TASK_INTERRUPTIBLE);
+ if (sk_wait_event(sk, &timeout,
+ sock_flag(sk, SOCK_DONE)))
+ break;
+ } while (!signal_pending(current) && timeout);
+
+ finish_wait(sk_sleep(sk), &wait);
+ }
+}
+
+static void virtio_transport_do_close(struct vsock_sock *vsk,
+ bool cancel_timeout)
+{
+ struct sock *sk = sk_vsock(vsk);
+
+ sock_set_flag(sk, SOCK_DONE);
+ vsk->peer_shutdown = SHUTDOWN_MASK;
+ if (vsock_stream_has_data(vsk) <= 0)
+ sk->sk_state = SS_DISCONNECTING;
+ sk->sk_state_change(sk);
+
+ if (vsk->close_work_scheduled &&
+ (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
+ vsk->close_work_scheduled = false;
+
+ vsock_remove_sock(vsk);
+
+ /* Release refcnt obtained when we scheduled the timeout */
+ sock_put(sk);
+ }
+}
+
+static void virtio_transport_close_timeout(struct work_struct *work)
+{
+ struct vsock_sock *vsk =
+ container_of(work, struct vsock_sock, close_work.work);
+ struct sock *sk = sk_vsock(vsk);
+
+ sock_hold(sk);
+ lock_sock(sk);
+
+ if (!sock_flag(sk, SOCK_DONE)) {
+ (void)virtio_transport_reset(vsk, NULL);
+
+ virtio_transport_do_close(vsk, false);
+ }
+
+ vsk->close_work_scheduled = false;
+
+ release_sock(sk);
+ sock_put(sk);
+}
+
+/* User context, vsk->sk is locked */
+static bool virtio_transport_close(struct vsock_sock *vsk)
+{
+ struct sock *sk = &vsk->sk;
+
+ if (!(sk->sk_state == SS_CONNECTED ||
+ sk->sk_state == SS_DISCONNECTING))
+ return true;
+
+ /* Already received SHUTDOWN from peer, reply with RST */
+ if ((vsk->peer_shutdown & SHUTDOWN_MASK) == SHUTDOWN_MASK) {
+ (void)virtio_transport_reset(vsk, NULL);
+ return true;
+ }
+
+ if ((sk->sk_shutdown & SHUTDOWN_MASK) != SHUTDOWN_MASK)
+ (void)virtio_transport_shutdown(vsk, SHUTDOWN_MASK);
+
+ if (sock_flag(sk, SOCK_LINGER) && !(current->flags & PF_EXITING))
+ virtio_transport_wait_close(sk, sk->sk_lingertime);
+
+ if (sock_flag(sk, SOCK_DONE)) {
+ return true;
+ }
+
+ sock_hold(sk);
+ INIT_DELAYED_WORK(&vsk->close_work,
+ virtio_transport_close_timeout);
+ vsk->close_work_scheduled = true;
+ schedule_delayed_work(&vsk->close_work, VSOCK_CLOSE_TIMEOUT);
+ return false;
+}
+
+void virtio_transport_release(struct vsock_sock *vsk)
+{
+ struct sock *sk = &vsk->sk;
+ bool remove_sock = true;
+
+ lock_sock(sk);
+ if (sk->sk_type == SOCK_STREAM)
+ remove_sock = virtio_transport_close(vsk);
+ release_sock(sk);
+
+ if (remove_sock)
+ vsock_remove_sock(vsk);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_release);
+
+static int
+virtio_transport_recv_connecting(struct sock *sk,
+ struct virtio_vsock_pkt *pkt)
@ -1096,7 +1254,7 @@ index 0000000..5b9e202
+ return 0;
+
+destroy:
+ virtio_transport_send_reset(vsk, pkt);
+ virtio_transport_reset(vsk, pkt);
+ sk->sk_state = SS_UNCONNECTED;
+ sk->sk_err = skerr;
+ sk->sk_error_report(sk);
@ -1116,10 +1274,10 @@ index 0000000..5b9e202
+ pkt->len = le32_to_cpu(pkt->hdr.len);
+ pkt->off = 0;
+
+ mutex_lock(&vvs->rx_lock);
+ spin_lock_bh(&vvs->rx_lock);
+ virtio_transport_inc_rx_pkt(vvs, pkt);
+ list_add_tail(&pkt->list, &vvs->rx_queue);
+ mutex_unlock(&vvs->rx_lock);
+ spin_unlock_bh(&vvs->rx_lock);
+
+ sk->sk_data_ready(sk);
+ return err;
@ -1138,11 +1296,7 @@ index 0000000..5b9e202
+ sk->sk_state_change(sk);
+ break;
+ case VIRTIO_VSOCK_OP_RST:
+ sock_set_flag(sk, SOCK_DONE);
+ vsk->peer_shutdown = SHUTDOWN_MASK;
+ if (vsock_stream_has_data(vsk) <= 0)
+ sk->sk_state = SS_DISCONNECTING;
+ sk->sk_state_change(sk);
+ virtio_transport_do_close(vsk, true);
+ break;
+ default:
+ err = -EINVAL;
@ -1153,6 +1307,16 @@ index 0000000..5b9e202
+ return err;
+}
+
+static void
+virtio_transport_recv_disconnecting(struct sock *sk,
+ struct virtio_vsock_pkt *pkt)
+{
+ struct vsock_sock *vsk = vsock_sk(sk);
+
+ if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
+ virtio_transport_do_close(vsk, true);
+}
+
+static int
+virtio_transport_send_response(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt *pkt)
@ -1162,9 +1326,10 @@ index 0000000..5b9e202
+ .type = VIRTIO_VSOCK_TYPE_STREAM,
+ .remote_cid = le32_to_cpu(pkt->hdr.src_cid),
+ .remote_port = le32_to_cpu(pkt->hdr.src_port),
+ .reply = true,
+ };
+
+ return virtio_transport_send_pkt(vsk, &info);
+ return virtio_transport_send_pkt_info(vsk, &info);
+}
+
+/* Handle server socket */
@ -1176,25 +1341,25 @@ index 0000000..5b9e202
+ struct sock *child;
+
+ if (le16_to_cpu(pkt->hdr.op) != VIRTIO_VSOCK_OP_REQUEST) {
+ virtio_transport_send_reset(vsk, pkt);
+ virtio_transport_reset(vsk, pkt);
+ return -EINVAL;
+ }
+
+ if (sk_acceptq_is_full(sk)) {
+ virtio_transport_send_reset(vsk, pkt);
+ virtio_transport_reset(vsk, pkt);
+ return -ENOMEM;
+ }
+
+ child = __vsock_create(sock_net(sk), NULL, sk, GFP_KERNEL,
+ sk->sk_type, 0);
+ if (!child) {
+ virtio_transport_send_reset(vsk, pkt);
+ virtio_transport_reset(vsk, pkt);
+ return -ENOMEM;
+ }
+
+ sk->sk_ack_backlog++;
+
+ lock_sock(child);
+ lock_sock_nested(child, SINGLE_DEPTH_NESTING);
+
+ child->sk_state = SS_CONNECTED;
+
@ -1214,7 +1379,7 @@ index 0000000..5b9e202
+ return 0;
+}
+
+static void virtio_transport_space_update(struct sock *sk,
+static bool virtio_transport_space_update(struct sock *sk,
+ struct virtio_vsock_pkt *pkt)
+{
+ struct vsock_sock *vsk = vsock_sk(sk);
@ -1222,14 +1387,12 @@ index 0000000..5b9e202
+ bool space_available;
+
+ /* buf_alloc and fwd_cnt is always included in the hdr */
+ mutex_lock(&vvs->tx_lock);
+ spin_lock_bh(&vvs->tx_lock);
+ vvs->peer_buf_alloc = le32_to_cpu(pkt->hdr.buf_alloc);
+ vvs->peer_fwd_cnt = le32_to_cpu(pkt->hdr.fwd_cnt);
+ space_available = virtio_transport_has_space(vsk);
+ mutex_unlock(&vvs->tx_lock);
+
+ if (space_available)
+ sk->sk_write_space(sk);
+ spin_unlock_bh(&vvs->tx_lock);
+ return space_available;
+}
+
+/* We are under the virtio-vsock's vsock->rx_lock or vhost-vsock's vq->mutex
@ -1240,6 +1403,7 @@ index 0000000..5b9e202
+ struct sockaddr_vm src, dst;
+ struct vsock_sock *vsk;
+ struct sock *sk;
+ bool space_available;
+
+ vsock_addr_init(&src, le32_to_cpu(pkt->hdr.src_cid),
+ le32_to_cpu(pkt->hdr.src_port));
@ -1256,7 +1420,7 @@ index 0000000..5b9e202
+ le32_to_cpu(pkt->hdr.fwd_cnt));
+
+ if (le16_to_cpu(pkt->hdr.type) != VIRTIO_VSOCK_TYPE_STREAM) {
+ (void)virtio_transport_send_reset_no_sock(pkt);
+ (void)virtio_transport_reset_no_sock(pkt);
+ goto free_pkt;
+ }
+
@ -1267,20 +1431,23 @@ index 0000000..5b9e202
+ if (!sk) {
+ sk = vsock_find_bound_socket(&dst);
+ if (!sk) {
+ (void)virtio_transport_send_reset_no_sock(pkt);
+ (void)virtio_transport_reset_no_sock(pkt);
+ goto free_pkt;
+ }
+ }
+
+ vsk = vsock_sk(sk);
+
+ virtio_transport_space_update(sk, pkt);
+ space_available = virtio_transport_space_update(sk, pkt);
+
+ lock_sock(sk);
+
+ /* Update CID in case it has changed after a transport reset event */
+ vsk->local_addr.svm_cid = dst.svm_cid;
+
+ if (space_available)
+ sk->sk_write_space(sk);
+
+ switch (sk->sk_state) {
+ case VSOCK_SS_LISTEN:
+ virtio_transport_recv_listen(sk, pkt);
@ -1293,6 +1460,10 @@ index 0000000..5b9e202
+ case SS_CONNECTED:
+ virtio_transport_recv_connected(sk, pkt);
+ break;
+ case SS_DISCONNECTING:
+ virtio_transport_recv_disconnecting(sk, pkt);
+ virtio_transport_free_pkt(pkt);
+ break;
+ default:
+ virtio_transport_free_pkt(pkt);
+ break;
@ -1321,5 +1492,5 @@ index 0000000..5b9e202
+MODULE_AUTHOR("Asias He");
+MODULE_DESCRIPTION("common code for virtio vsock");
--
2.9.0
2.9.3

View File

@ -1,24 +1,26 @@
From 425faa8655fbbe9191ddc88fe57097e8be2fdf44 Mon Sep 17 00:00:00 2001
From e21bdf5a9a62fbdabddca3cafc8aa2b02f2cc165 Mon Sep 17 00:00:00 2001
From: Asias He <asias@redhat.com>
Date: Thu, 13 Jun 2013 18:28:48 +0800
Subject: [PATCH 06/40] VSOCK: Introduce virtio_transport.ko
Date: Thu, 28 Jul 2016 15:36:33 +0100
Subject: [PATCH 08/46] VSOCK: Introduce virtio_transport.ko
VM sockets virtio transport implementation. This driver runs in the
guest.
Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 0ea9e1d3a9e3ef7d2a1462d3de6b95131dc7d872)
---
MAINTAINERS | 1 +
net/vmw_vsock/virtio_transport.c | 584 +++++++++++++++++++++++++++++++++++++++
2 files changed, 585 insertions(+)
net/vmw_vsock/virtio_transport.c | 624 +++++++++++++++++++++++++++++++++++++++
2 files changed, 625 insertions(+)
create mode 100644 net/vmw_vsock/virtio_transport.c
diff --git a/MAINTAINERS b/MAINTAINERS
index b93ba8b..82d1123 100644
index 3e60f59..c7e4c9a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11391,6 +11391,7 @@ S: Maintained
@@ -11404,6 +11404,7 @@ S: Maintained
F: include/linux/virtio_vsock.h
F: include/uapi/linux/virtio_vsock.h
F: net/vmw_vsock/virtio_transport_common.c
@ -28,10 +30,10 @@ index b93ba8b..82d1123 100644
M: Stephen Chandler Paul <thatslyude@gmail.com>
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
new file mode 100644
index 0000000..45472e0
index 0000000..699dfab
--- /dev/null
+++ b/net/vmw_vsock/virtio_transport.c
@@ -0,0 +1,584 @@
@@ -0,0 +1,624 @@
+/*
+ * virtio transport for vsock
+ *
@ -47,6 +49,7 @@ index 0000000..45472e0
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/atomic.h>
+#include <linux/virtio.h>
+#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h>
@ -58,7 +61,6 @@ index 0000000..45472e0
+static struct workqueue_struct *virtio_vsock_workqueue;
+static struct virtio_vsock *the_virtio_vsock;
+static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
+static void virtio_vsock_rx_fill(struct virtio_vsock *vsock);
+
+struct virtio_vsock {
+ struct virtio_device *vdev;
@ -69,13 +71,16 @@ index 0000000..45472e0
+ struct work_struct rx_work;
+ struct work_struct event_work;
+
+ wait_queue_head_t tx_wait; /* for waiting for tx resources */
+
+ /* The following fields are protected by tx_lock. vqs[VSOCK_VQ_TX]
+ * must be accessed with tx_lock held.
+ */
+ struct mutex tx_lock;
+ u32 total_tx_buf;
+
+ struct work_struct send_pkt_work;
+ spinlock_t send_pkt_list_lock;
+ struct list_head send_pkt_list;
+
+ atomic_t queued_replies;
+
+ /* The following fields are protected by rx_lock. vqs[VSOCK_VQ_RX]
+ * must be accessed with rx_lock held.
@ -105,45 +110,88 @@ index 0000000..45472e0
+ return vsock->guest_cid;
+}
+
+static int
+virtio_transport_send_one_pkt(struct virtio_vsock *vsock,
+ struct virtio_vsock_pkt *pkt)
+static void
+virtio_transport_send_pkt_work(struct work_struct *work)
+{
+ struct scatterlist hdr, buf, *sgs[2];
+ int ret, in_sg = 0, out_sg = 0;
+ struct virtio_vsock *vsock =
+ container_of(work, struct virtio_vsock, send_pkt_work);
+ struct virtqueue *vq;
+ DEFINE_WAIT(wait);
+ bool added = false;
+ bool restart_rx = false;
+
+ mutex_lock(&vsock->tx_lock);
+
+ vq = vsock->vqs[VSOCK_VQ_TX];
+
+ /* Put pkt in the virtqueue */
+ sg_init_one(&hdr, &pkt->hdr, sizeof(pkt->hdr));
+ sgs[out_sg++] = &hdr;
+ if (pkt->buf) {
+ sg_init_one(&buf, pkt->buf, pkt->len);
+ sgs[out_sg++] = &buf;
+ /* Avoid unnecessary interrupts while we're processing the ring */
+ virtqueue_disable_cb(vq);
+
+ for (;;) {
+ struct virtio_vsock_pkt *pkt;
+ struct scatterlist hdr, buf, *sgs[2];
+ int ret, in_sg = 0, out_sg = 0;
+ bool reply;
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ if (list_empty(&vsock->send_pkt_list)) {
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+ virtqueue_enable_cb(vq);
+ break;
+ }
+
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ reply = pkt->reply;
+
+ sg_init_one(&hdr, &pkt->hdr, sizeof(pkt->hdr));
+ sgs[out_sg++] = &hdr;
+ if (pkt->buf) {
+ sg_init_one(&buf, pkt->buf, pkt->len);
+ sgs[out_sg++] = &buf;
+ }
+
+ ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, pkt, GFP_KERNEL);
+ if (ret < 0) {
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ if (!virtqueue_enable_cb(vq) && ret == -ENOSPC)
+ continue; /* retry now that we have more space */
+ break;
+ }
+
+ if (reply) {
+ struct virtqueue *rx_vq = vsock->vqs[VSOCK_VQ_RX];
+ int val;
+
+ val = atomic_dec_return(&vsock->queued_replies);
+
+ /* Do we now have resources to resume rx processing? */
+ if (val + 1 == virtqueue_get_vring_size(rx_vq))
+ restart_rx = true;
+ }
+
+ added = true;
+ }
+
+ mutex_lock(&vsock->tx_lock);
+ while ((ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, pkt,
+ GFP_KERNEL)) < 0) {
+ prepare_to_wait_exclusive(&vsock->tx_wait, &wait,
+ TASK_UNINTERRUPTIBLE);
+ mutex_unlock(&vsock->tx_lock);
+ schedule();
+ mutex_lock(&vsock->tx_lock);
+ finish_wait(&vsock->tx_wait, &wait);
+ }
+ virtqueue_kick(vq);
+ if (added)
+ virtqueue_kick(vq);
+
+ mutex_unlock(&vsock->tx_lock);
+
+ return pkt->len;
+ if (restart_rx)
+ queue_work(virtio_vsock_workqueue, &vsock->rx_work);
+}
+
+static int
+virtio_transport_send_pkt_no_sock(struct virtio_vsock_pkt *pkt)
+virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
+{
+ struct virtio_vsock *vsock;
+ int len = pkt->len;
+
+ vsock = virtio_vsock_get();
+ if (!vsock) {
@ -151,71 +199,15 @@ index 0000000..45472e0
+ return -ENODEV;
+ }
+
+ return virtio_transport_send_one_pkt(vsock, pkt);
+}
+ if (pkt->reply)
+ atomic_inc(&vsock->queued_replies);
+
+static int
+virtio_transport_send_pkt(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt_info *info)
+{
+ u32 src_cid, src_port, dst_cid, dst_port;
+ struct virtio_vsock_sock *vvs;
+ struct virtio_vsock_pkt *pkt;
+ struct virtio_vsock *vsock;
+ u32 pkt_len = info->pkt_len;
+ DEFINE_WAIT(wait);
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add_tail(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ vsock = virtio_vsock_get();
+ if (!vsock)
+ return -ENODEV;
+
+ src_cid = virtio_transport_get_local_cid();
+ src_port = vsk->local_addr.svm_port;
+ if (!info->remote_cid) {
+ dst_cid = vsk->remote_addr.svm_cid;
+ dst_port = vsk->remote_addr.svm_port;
+ } else {
+ dst_cid = info->remote_cid;
+ dst_port = info->remote_port;
+ }
+
+ vvs = vsk->trans;
+
+ if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
+ pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+ pkt_len = virtio_transport_get_credit(vvs, pkt_len);
+ /* Do not send zero length OP_RW pkt*/
+ if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
+ return pkt_len;
+
+ /* Respect global tx buf limitation */
+ mutex_lock(&vsock->tx_lock);
+ while (pkt_len + vsock->total_tx_buf > VIRTIO_VSOCK_MAX_TX_BUF_SIZE) {
+ prepare_to_wait_exclusive(&vsock->tx_wait, &wait,
+ TASK_UNINTERRUPTIBLE);
+ mutex_unlock(&vsock->tx_lock);
+ schedule();
+ mutex_lock(&vsock->tx_lock);
+ finish_wait(&vsock->tx_wait, &wait);
+ }
+ vsock->total_tx_buf += pkt_len;
+ mutex_unlock(&vsock->tx_lock);
+
+ pkt = virtio_transport_alloc_pkt(info, pkt_len,
+ src_cid, src_port,
+ dst_cid, dst_port);
+ if (!pkt) {
+ mutex_lock(&vsock->tx_lock);
+ vsock->total_tx_buf -= pkt_len;
+ mutex_unlock(&vsock->tx_lock);
+ virtio_transport_put_credit(vvs, pkt_len);
+ wake_up(&vsock->tx_wait);
+ return -ENOMEM;
+ }
+
+ virtio_transport_inc_tx_pkt(vvs, pkt);
+
+ return virtio_transport_send_one_pkt(vsock, pkt);
+ queue_work(virtio_vsock_workqueue, &vsock->send_pkt_work);
+ return len;
+}
+
+static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
@ -258,7 +250,7 @@ index 0000000..45472e0
+ virtqueue_kick(vq);
+}
+
+static void virtio_transport_send_pkt_work(struct work_struct *work)
+static void virtio_transport_tx_work(struct work_struct *work)
+{
+ struct virtio_vsock *vsock =
+ container_of(work, struct virtio_vsock, tx_work);
@ -273,7 +265,6 @@ index 0000000..45472e0
+
+ virtqueue_disable_cb(vq);
+ while ((pkt = virtqueue_get_buf(vq, &len)) != NULL) {
+ vsock->total_tx_buf -= pkt->len;
+ virtio_transport_free_pkt(pkt);
+ added = true;
+ }
@ -281,23 +272,50 @@ index 0000000..45472e0
+ mutex_unlock(&vsock->tx_lock);
+
+ if (added)
+ wake_up(&vsock->tx_wait);
+ queue_work(virtio_vsock_workqueue, &vsock->send_pkt_work);
+}
+
+static void virtio_transport_recv_pkt_work(struct work_struct *work)
+/* Is there space left for replies to rx packets? */
+static bool virtio_transport_more_replies(struct virtio_vsock *vsock)
+{
+ struct virtqueue *vq = vsock->vqs[VSOCK_VQ_RX];
+ int val;
+
+ smp_rmb(); /* paired with atomic_inc() and atomic_dec_return() */
+ val = atomic_read(&vsock->queued_replies);
+
+ return val < virtqueue_get_vring_size(vq);
+}
+
+static void virtio_transport_rx_work(struct work_struct *work)
+{
+ struct virtio_vsock *vsock =
+ container_of(work, struct virtio_vsock, rx_work);
+ struct virtqueue *vq;
+
+ vq = vsock->vqs[VSOCK_VQ_RX];
+ mutex_lock(&vsock->rx_lock);
+ do {
+ struct virtio_vsock_pkt *pkt;
+ unsigned int len;
+
+ mutex_lock(&vsock->rx_lock);
+
+ do {
+ virtqueue_disable_cb(vq);
+ while ((pkt = virtqueue_get_buf(vq, &len)) != NULL) {
+ for (;;) {
+ struct virtio_vsock_pkt *pkt;
+ unsigned int len;
+
+ if (!virtio_transport_more_replies(vsock)) {
+ /* Stop rx until the device processes already
+ * pending replies. Leave rx virtqueue
+ * callbacks disabled.
+ */
+ goto out;
+ }
+
+ pkt = virtqueue_get_buf(vq, &len);
+ if (!pkt) {
+ break;
+ }
+
+ vsock->rx_buf_nr--;
+
+ /* Drop short/long packets */
@ -312,6 +330,7 @@ index 0000000..45472e0
+ }
+ } while (!virtqueue_enable_cb(vq));
+
+out:
+ if (vsock->rx_buf_nr < vsock->rx_buf_max_nr / 2)
+ virtio_vsock_rx_fill(vsock);
+ mutex_unlock(&vsock->rx_lock);
@ -357,11 +376,11 @@ index 0000000..45472e0
+static void virtio_vsock_update_guest_cid(struct virtio_vsock *vsock)
+{
+ struct virtio_device *vdev = vsock->vdev;
+ u32 guest_cid;
+ u64 guest_cid;
+
+ vdev->config->get(vdev, offsetof(struct virtio_vsock_config, guest_cid),
+ &guest_cid, sizeof(guest_cid));
+ vsock->guest_cid = le32_to_cpu(guest_cid);
+ vsock->guest_cid = le64_to_cpu(guest_cid);
+}
+
+/* event_lock must be held */
@ -473,8 +492,7 @@ index 0000000..45472e0
+ .get_max_buffer_size = virtio_transport_get_max_buffer_size,
+ },
+
+ .send_pkt = virtio_transport_send_pkt,
+ .send_pkt_no_sock = virtio_transport_send_pkt_no_sock,
+ .send_pkt = virtio_transport_send_pkt,
+};
+
+static int virtio_vsock_probe(struct virtio_device *vdev)
@ -523,16 +541,19 @@ index 0000000..45472e0
+
+ vsock->rx_buf_nr = 0;
+ vsock->rx_buf_max_nr = 0;
+ atomic_set(&vsock->queued_replies, 0);
+
+ vdev->priv = vsock;
+ the_virtio_vsock = vsock;
+ init_waitqueue_head(&vsock->tx_wait);
+ mutex_init(&vsock->tx_lock);
+ mutex_init(&vsock->rx_lock);
+ mutex_init(&vsock->event_lock);
+ INIT_WORK(&vsock->rx_work, virtio_transport_recv_pkt_work);
+ INIT_WORK(&vsock->tx_work, virtio_transport_send_pkt_work);
+ spin_lock_init(&vsock->send_pkt_list_lock);
+ INIT_LIST_HEAD(&vsock->send_pkt_list);
+ INIT_WORK(&vsock->rx_work, virtio_transport_rx_work);
+ INIT_WORK(&vsock->tx_work, virtio_transport_tx_work);
+ INIT_WORK(&vsock->event_work, virtio_transport_event_work);
+ INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
+
+ mutex_lock(&vsock->rx_lock);
+ virtio_vsock_rx_fill(vsock);
@ -556,13 +577,34 @@ index 0000000..45472e0
+static void virtio_vsock_remove(struct virtio_device *vdev)
+{
+ struct virtio_vsock *vsock = vdev->priv;
+ struct virtio_vsock_pkt *pkt;
+
+ flush_work(&vsock->rx_work);
+ flush_work(&vsock->tx_work);
+ flush_work(&vsock->event_work);
+ flush_work(&vsock->send_pkt_work);
+
+ vdev->config->reset(vdev);
+
+ mutex_lock(&vsock->rx_lock);
+ while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
+ virtio_transport_free_pkt(pkt);
+ mutex_unlock(&vsock->rx_lock);
+
+ mutex_lock(&vsock->tx_lock);
+ while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_TX])))
+ virtio_transport_free_pkt(pkt);
+ mutex_unlock(&vsock->tx_lock);
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ while (!list_empty(&vsock->send_pkt_list)) {
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del(&pkt->list);
+ virtio_transport_free_pkt(pkt);
+ }
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ mutex_lock(&the_virtio_vsock_mutex);
+ the_virtio_vsock = NULL;
+ vsock_core_exit();
@ -617,5 +659,5 @@ index 0000000..45472e0
+MODULE_DESCRIPTION("virtio transport for vsock");
+MODULE_DEVICE_TABLE(virtio, id_table);
--
2.9.0
2.9.3

View File

@ -1,26 +1,27 @@
From b83fd98b14a95b6ae4b5eee5c565e645aee41442 Mon Sep 17 00:00:00 2001
From 7cba4e91a9e1359345ccb7dbff243cedf5b5e6ea Mon Sep 17 00:00:00 2001
From: Asias He <asias@redhat.com>
Date: Thu, 13 Jun 2013 18:29:21 +0800
Subject: [PATCH 07/40] VSOCK: Introduce vhost_vsock.ko
Date: Thu, 28 Jul 2016 15:36:34 +0100
Subject: [PATCH 09/46] VSOCK: Introduce vhost_vsock.ko
VM sockets vhost transport implementation. This driver runs on the
host.
Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 433fc58e6bf2c8bd97e57153ed28e64fd78207b8)
---
MAINTAINERS | 2 +
drivers/vhost/vsock.c | 694 ++++++++++++++++++++++++++++++++++++++++++++++++++
drivers/vhost/vsock.h | 5 +
3 files changed, 701 insertions(+)
MAINTAINERS | 2 +
drivers/vhost/vsock.c | 722 +++++++++++++++++++++++++++++++++++++++++++++
include/uapi/linux/vhost.h | 5 +
3 files changed, 729 insertions(+)
create mode 100644 drivers/vhost/vsock.c
create mode 100644 drivers/vhost/vsock.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 82d1123..12d49f5 100644
index c7e4c9a..fa94182 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11392,6 +11392,8 @@ F: include/linux/virtio_vsock.h
@@ -11405,6 +11405,8 @@ F: include/linux/virtio_vsock.h
F: include/uapi/linux/virtio_vsock.h
F: net/vmw_vsock/virtio_transport_common.c
F: net/vmw_vsock/virtio_transport.c
@ -31,10 +32,10 @@ index 82d1123..12d49f5 100644
M: Stephen Chandler Paul <thatslyude@gmail.com>
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
new file mode 100644
index 0000000..8488d01
index 0000000..028ca16
--- /dev/null
+++ b/drivers/vhost/vsock.c
@@ -0,0 +1,694 @@
@@ -0,0 +1,722 @@
+/*
+ * vhost transport for vsock
+ *
@ -45,15 +46,16 @@ index 0000000..8488d01
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+#include <linux/miscdevice.h>
+#include <linux/atomic.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/vmalloc.h>
+#include <net/sock.h>
+#include <linux/virtio_vsock.h>
+#include <linux/vhost.h>
+
+#include <net/af_vsock.h>
+#include "vhost.h"
+#include "vsock.h"
+
+#define VHOST_VSOCK_DEFAULT_HOST_CID 2
+
@ -62,22 +64,21 @@ index 0000000..8488d01
+};
+
+/* Used to track all the vhost_vsock instances on the system. */
+static DEFINE_SPINLOCK(vhost_vsock_lock);
+static LIST_HEAD(vhost_vsock_list);
+static DEFINE_MUTEX(vhost_vsock_mutex);
+
+struct vhost_vsock {
+ struct vhost_dev dev;
+ struct vhost_virtqueue vqs[2];
+
+ /* Link to global vhost_vsock_list, protected by vhost_vsock_mutex */
+ /* Link to global vhost_vsock_list, protected by vhost_vsock_lock */
+ struct list_head list;
+
+ struct vhost_work send_pkt_work;
+ wait_queue_head_t send_wait;
+
+ /* Fields protected by vqs[VSOCK_VQ_RX].mutex */
+ spinlock_t send_pkt_list_lock;
+ struct list_head send_pkt_list; /* host->guest pending packets */
+ u32 total_tx_buf;
+
+ atomic_t queued_replies;
+
+ u32 guest_cid;
+};
@ -91,7 +92,7 @@ index 0000000..8488d01
+{
+ struct vhost_vsock *vsock;
+
+ mutex_lock(&vhost_vsock_mutex);
+ spin_lock_bh(&vhost_vsock_lock);
+ list_for_each_entry(vsock, &vhost_vsock_list, list) {
+ u32 other_cid = vsock->guest_cid;
+
@ -100,11 +101,11 @@ index 0000000..8488d01
+ continue;
+
+ if (other_cid == guest_cid) {
+ mutex_unlock(&vhost_vsock_mutex);
+ spin_unlock_bh(&vhost_vsock_lock);
+ return vsock;
+ }
+ }
+ mutex_unlock(&vhost_vsock_mutex);
+ spin_unlock_bh(&vhost_vsock_lock);
+
+ return NULL;
+}
@ -113,10 +114,15 @@ index 0000000..8488d01
+vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
+ struct vhost_virtqueue *vq)
+{
+ struct vhost_virtqueue *tx_vq = &vsock->vqs[VSOCK_VQ_TX];
+ bool added = false;
+ bool restart_tx = false;
+
+ mutex_lock(&vq->mutex);
+
+ if (!vq->private_data)
+ goto out;
+
+ /* Avoid further vmexits, we're already processing the virtqueue */
+ vhost_disable_notify(&vsock->dev, vq);
+
@ -128,17 +134,32 @@ index 0000000..8488d01
+ size_t len;
+ int head;
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ if (list_empty(&vsock->send_pkt_list)) {
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+ vhost_enable_notify(&vsock->dev, vq);
+ break;
+ }
+
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ &out, &in, NULL, NULL);
+ if (head < 0)
+ if (head < 0) {
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+ break;
+ }
+
+ if (head == vq->num) {
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ /* We cannot finish yet if more buffers snuck in while
+ * re-enabling notify.
+ */
@ -149,10 +170,6 @@ index 0000000..8488d01
+ break;
+ }
+
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+
+ if (out) {
+ virtio_transport_free_pkt(pkt);
+ vq_err(vq, "Expected 0 output buffers, got %u\n", out);
@ -179,16 +196,26 @@ index 0000000..8488d01
+ vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+ added = true;
+
+ vsock->total_tx_buf -= pkt->len;
+ if (pkt->reply) {
+ int val;
+
+ val = atomic_dec_return(&vsock->queued_replies);
+
+ /* Do we have resources to resume tx processing? */
+ if (val + 1 == tx_vq->num)
+ restart_tx = true;
+ }
+
+ virtio_transport_free_pkt(pkt);
+ }
+ if (added)
+ vhost_signal(&vsock->dev, vq);
+
+out:
+ mutex_unlock(&vq->mutex);
+
+ if (added)
+ wake_up(&vsock->send_wait);
+ if (restart_tx)
+ vhost_poll_queue(&tx_vq->poll);
+}
+
+static void vhost_transport_send_pkt_work(struct vhost_work *work)
@ -203,104 +230,30 @@ index 0000000..8488d01
+}
+
+static int
+vhost_transport_send_one_pkt(struct vhost_vsock *vsock,
+ struct virtio_vsock_pkt *pkt)
+{
+ struct vhost_virtqueue *vq = &vsock->vqs[VSOCK_VQ_RX];
+
+ /* Queue it up in vhost work */
+ mutex_lock(&vq->mutex);
+ list_add_tail(&pkt->list, &vsock->send_pkt_list);
+ vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
+ mutex_unlock(&vq->mutex);
+
+ return pkt->len;
+}
+
+static int
+vhost_transport_send_pkt_no_sock(struct virtio_vsock_pkt *pkt)
+vhost_transport_send_pkt(struct virtio_vsock_pkt *pkt)
+{
+ struct vhost_vsock *vsock;
+ struct vhost_virtqueue *vq;
+ int len = pkt->len;
+
+ /* Find the vhost_vsock according to guest context id */
+ vsock = vhost_vsock_get(le32_to_cpu(pkt->hdr.dst_cid));
+ vsock = vhost_vsock_get(le64_to_cpu(pkt->hdr.dst_cid));
+ if (!vsock) {
+ virtio_transport_free_pkt(pkt);
+ return -ENODEV;
+ }
+
+ return vhost_transport_send_one_pkt(vsock, pkt);
+}
+
+static int
+vhost_transport_send_pkt(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt_info *info)
+{
+ u32 src_cid, src_port, dst_cid, dst_port;
+ struct virtio_vsock_sock *vvs;
+ struct virtio_vsock_pkt *pkt;
+ struct vhost_virtqueue *vq;
+ struct vhost_vsock *vsock;
+ u32 pkt_len = info->pkt_len;
+ DEFINE_WAIT(wait);
+
+ src_cid = vhost_transport_get_local_cid();
+ src_port = vsk->local_addr.svm_port;
+ if (!info->remote_cid) {
+ dst_cid = vsk->remote_addr.svm_cid;
+ dst_port = vsk->remote_addr.svm_port;
+ } else {
+ dst_cid = info->remote_cid;
+ dst_port = info->remote_port;
+ }
+
+ /* Find the vhost_vsock according to guest context id */
+ vsock = vhost_vsock_get(dst_cid);
+ if (!vsock)
+ return -ENODEV;
+
+ vvs = vsk->trans;
+ vq = &vsock->vqs[VSOCK_VQ_RX];
+
+ /* we can send less than pkt_len bytes */
+ if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
+ pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+ if (pkt->reply)
+ atomic_inc(&vsock->queued_replies);
+
+ /* virtio_transport_get_credit might return less than pkt_len credit */
+ pkt_len = virtio_transport_get_credit(vvs, pkt_len);
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add_tail(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ /* Do not send zero length OP_RW pkt*/
+ if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
+ return pkt_len;
+
+ /* Respect global tx buf limitation */
+ mutex_lock(&vq->mutex);
+ while (pkt_len + vsock->total_tx_buf > VIRTIO_VSOCK_MAX_TX_BUF_SIZE) {
+ prepare_to_wait_exclusive(&vsock->send_wait, &wait,
+ TASK_UNINTERRUPTIBLE);
+ mutex_unlock(&vq->mutex);
+ schedule();
+ mutex_lock(&vq->mutex);
+ finish_wait(&vsock->send_wait, &wait);
+ }
+ vsock->total_tx_buf += pkt_len;
+ mutex_unlock(&vq->mutex);
+
+ pkt = virtio_transport_alloc_pkt(info, pkt_len,
+ src_cid, src_port,
+ dst_cid, dst_port);
+ if (!pkt) {
+ mutex_lock(&vq->mutex);
+ vsock->total_tx_buf -= pkt_len;
+ mutex_unlock(&vq->mutex);
+ virtio_transport_put_credit(vvs, pkt_len);
+ wake_up(&vsock->send_wait);
+ return -ENOMEM;
+ }
+
+ virtio_transport_inc_tx_pkt(vvs, pkt);
+
+ return vhost_transport_send_one_pkt(vsock, pkt);
+ vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
+ return len;
+}
+
+static struct virtio_vsock_pkt *
@ -362,6 +315,18 @@ index 0000000..8488d01
+ return pkt;
+}
+
+/* Is there space left for replies to rx packets? */
+static bool vhost_vsock_more_replies(struct vhost_vsock *vsock)
+{
+ struct vhost_virtqueue *vq = &vsock->vqs[VSOCK_VQ_TX];
+ int val;
+
+ smp_rmb(); /* paired with atomic_inc() and atomic_dec_return() */
+ val = atomic_read(&vsock->queued_replies);
+
+ return val < vq->num;
+}
+
+static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
+{
+ struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
@ -374,8 +339,20 @@ index 0000000..8488d01
+ bool added = false;
+
+ mutex_lock(&vq->mutex);
+
+ if (!vq->private_data)
+ goto out;
+
+ vhost_disable_notify(&vsock->dev, vq);
+ for (;;) {
+ if (!vhost_vsock_more_replies(vsock)) {
+ /* Stop tx until the device processes already
+ * pending replies. Leave tx virtqueue
+ * callbacks disabled.
+ */
+ goto no_more_replies;
+ }
+
+ head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ &out, &in, NULL, NULL);
+ if (head < 0)
@ -396,7 +373,7 @@ index 0000000..8488d01
+ }
+
+ /* Only accept correctly addressed packets */
+ if (le32_to_cpu(pkt->hdr.src_cid) == vsock->guest_cid)
+ if (le64_to_cpu(pkt->hdr.src_cid) == vsock->guest_cid)
+ virtio_transport_recv_pkt(pkt);
+ else
+ virtio_transport_free_pkt(pkt);
@ -404,8 +381,12 @@ index 0000000..8488d01
+ vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+ added = true;
+ }
+
+no_more_replies:
+ if (added)
+ vhost_signal(&vsock->dev, vq);
+
+out:
+ mutex_unlock(&vq->mutex);
+}
+
@ -465,21 +446,36 @@ index 0000000..8488d01
+ return ret;
+}
+
+static void vhost_vsock_stop(struct vhost_vsock *vsock)
+static int vhost_vsock_stop(struct vhost_vsock *vsock)
+{
+ size_t i;
+ int ret;
+
+ mutex_lock(&vsock->dev.mutex);
+
+ ret = vhost_dev_check_owner(&vsock->dev);
+ if (ret)
+ goto err;
+
+ for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
+ struct vhost_virtqueue *vq = &vsock->vqs[i];
+
+ mutex_lock(&vq->mutex);
+ vq->private_data = vsock;
+ vq->private_data = NULL;
+ mutex_unlock(&vq->mutex);
+ }
+
+err:
+ mutex_unlock(&vsock->dev.mutex);
+ return ret;
+}
+
+static void vhost_vsock_free(struct vhost_vsock *vsock)
+{
+ if (is_vmalloc_addr(vsock))
+ vfree(vsock);
+ else
+ kfree(vsock);
+}
+
+static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
@ -488,9 +484,15 @@ index 0000000..8488d01
+ struct vhost_vsock *vsock;
+ int ret;
+
+ vsock = kzalloc(sizeof(*vsock), GFP_KERNEL);
+ if (!vsock)
+ return -ENOMEM;
+ /* This struct is large and allocation could fail, fall back to vmalloc
+ * if there is no other way.
+ */
+ vsock = kzalloc(sizeof(*vsock), GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+ if (!vsock) {
+ vsock = vmalloc(sizeof(*vsock));
+ if (!vsock)
+ return -ENOMEM;
+ }
+
+ vqs = kmalloc_array(ARRAY_SIZE(vsock->vqs), sizeof(*vqs), GFP_KERNEL);
+ if (!vqs) {
@ -498,6 +500,8 @@ index 0000000..8488d01
+ goto out;
+ }
+
+ atomic_set(&vsock->queued_replies, 0);
+
+ vqs[VSOCK_VQ_TX] = &vsock->vqs[VSOCK_VQ_TX];
+ vqs[VSOCK_VQ_RX] = &vsock->vqs[VSOCK_VQ_RX];
+ vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick;
@ -506,17 +510,17 @@ index 0000000..8488d01
+ vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs));
+
+ file->private_data = vsock;
+ init_waitqueue_head(&vsock->send_wait);
+ spin_lock_init(&vsock->send_pkt_list_lock);
+ INIT_LIST_HEAD(&vsock->send_pkt_list);
+ vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work);
+
+ mutex_lock(&vhost_vsock_mutex);
+ spin_lock_bh(&vhost_vsock_lock);
+ list_add_tail(&vsock->list, &vhost_vsock_list);
+ mutex_unlock(&vhost_vsock_mutex);
+ spin_unlock_bh(&vhost_vsock_lock);
+ return 0;
+
+out:
+ kfree(vsock);
+ vhost_vsock_free(vsock);
+ return ret;
+}
+
@ -534,22 +538,27 @@ index 0000000..8488d01
+{
+ struct vsock_sock *vsk = vsock_sk(sk);
+
+ lock_sock(sk);
+ /* vmci_transport.c doesn't take sk_lock here either. At least we're
+ * under vsock_table_lock so the sock cannot disappear while we're
+ * executing.
+ */
+
+ if (!vhost_vsock_get(vsk->local_addr.svm_cid)) {
+ sock_set_flag(sk, SOCK_DONE);
+ vsk->peer_shutdown = SHUTDOWN_MASK;
+ sk->sk_state = SS_UNCONNECTED;
+ sk->sk_err = ECONNRESET;
+ sk->sk_error_report(sk);
+ }
+ release_sock(sk);
+}
+
+static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
+{
+ struct vhost_vsock *vsock = file->private_data;
+
+ mutex_lock(&vhost_vsock_mutex);
+ spin_lock_bh(&vhost_vsock_lock);
+ list_del(&vsock->list);
+ mutex_unlock(&vhost_vsock_mutex);
+ spin_unlock_bh(&vhost_vsock_lock);
+
+ /* Iterating over all connections for all CIDs to find orphans is
+ * inefficient. Room for improvement here. */
@ -558,18 +567,35 @@ index 0000000..8488d01
+ vhost_vsock_stop(vsock);
+ vhost_vsock_flush(vsock);
+ vhost_dev_stop(&vsock->dev);
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ while (!list_empty(&vsock->send_pkt_list)) {
+ struct virtio_vsock_pkt *pkt;
+
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+ virtio_transport_free_pkt(pkt);
+ }
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ vhost_dev_cleanup(&vsock->dev, false);
+ kfree(vsock->dev.vqs);
+ kfree(vsock);
+ vhost_vsock_free(vsock);
+ return 0;
+}
+
+static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u32 guest_cid)
+static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u64 guest_cid)
+{
+ struct vhost_vsock *other;
+
+ /* Refuse reserved CIDs */
+ if (guest_cid <= VMADDR_CID_HOST)
+ if (guest_cid <= VMADDR_CID_HOST ||
+ guest_cid == U32_MAX)
+ return -EINVAL;
+
+ /* 64-bit CIDs are not yet supported */
+ if (guest_cid > U32_MAX)
+ return -EINVAL;
+
+ /* Refuse if CID is already in use */
@ -577,9 +603,9 @@ index 0000000..8488d01
+ if (other && other != vsock)
+ return -EADDRINUSE;
+
+ mutex_lock(&vhost_vsock_mutex);
+ spin_lock_bh(&vhost_vsock_lock);
+ vsock->guest_cid = guest_cid;
+ mutex_unlock(&vhost_vsock_mutex);
+ spin_unlock_bh(&vhost_vsock_lock);
+
+ return 0;
+}
@ -614,26 +640,30 @@ index 0000000..8488d01
+{
+ struct vhost_vsock *vsock = f->private_data;
+ void __user *argp = (void __user *)arg;
+ u64 __user *featurep = argp;
+ u32 __user *cidp = argp;
+ u32 guest_cid;
+ u64 guest_cid;
+ u64 features;
+ int start;
+ int r;
+
+ switch (ioctl) {
+ case VHOST_VSOCK_SET_GUEST_CID:
+ if (get_user(guest_cid, cidp))
+ if (copy_from_user(&guest_cid, argp, sizeof(guest_cid)))
+ return -EFAULT;
+ return vhost_vsock_set_cid(vsock, guest_cid);
+ case VHOST_VSOCK_START:
+ return vhost_vsock_start(vsock);
+ case VHOST_VSOCK_SET_RUNNING:
+ if (copy_from_user(&start, argp, sizeof(start)))
+ return -EFAULT;
+ if (start)
+ return vhost_vsock_start(vsock);
+ else
+ return vhost_vsock_stop(vsock);
+ case VHOST_GET_FEATURES:
+ features = VHOST_VSOCK_FEATURES;
+ if (copy_to_user(featurep, &features, sizeof(features)))
+ if (copy_to_user(argp, &features, sizeof(features)))
+ return -EFAULT;
+ return 0;
+ case VHOST_SET_FEATURES:
+ if (copy_from_user(&features, featurep, sizeof(features)))
+ if (copy_from_user(&features, argp, sizeof(features)))
+ return -EFAULT;
+ return vhost_vsock_set_features(vsock, features);
+ default:
@ -705,7 +735,6 @@ index 0000000..8488d01
+ },
+
+ .send_pkt = vhost_transport_send_pkt,
+ .send_pkt_no_sock = vhost_transport_send_pkt_no_sock,
+};
+
+static int __init vhost_vsock_init(void)
@ -729,17 +758,20 @@ index 0000000..8488d01
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Asias He");
+MODULE_DESCRIPTION("vhost transport for vsock ");
diff --git a/drivers/vhost/vsock.h b/drivers/vhost/vsock.h
new file mode 100644
index 0000000..173f9fc
--- /dev/null
+++ b/drivers/vhost/vsock.h
@@ -0,0 +1,5 @@
+#ifndef VHOST_VSOCK_H
+#define VHOST_VSOCK_H
+#define VHOST_VSOCK_SET_GUEST_CID _IOW(VHOST_VIRTIO, 0x60, __u32)
+#define VHOST_VSOCK_START _IO(VHOST_VIRTIO, 0x61)
+#endif
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index ab373191..b306476 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -169,4 +169,9 @@ struct vhost_scsi_target {
#define VHOST_SCSI_SET_EVENTS_MISSED _IOW(VHOST_VIRTIO, 0x43, __u32)
#define VHOST_SCSI_GET_EVENTS_MISSED _IOW(VHOST_VIRTIO, 0x44, __u32)
+/* VHOST_VSOCK specific defines */
+
+#define VHOST_VSOCK_SET_GUEST_CID _IOW(VHOST_VIRTIO, 0x60, __u64)
+#define VHOST_VSOCK_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
+
#endif
--
2.9.0
2.9.3


@ -1,24 +1,26 @@
From 17871c8224feaa5cf4944f3f09800968a8f19589 Mon Sep 17 00:00:00 2001
From 5600cbdcb09f2a322c696e37570d031932a3baa4 Mon Sep 17 00:00:00 2001
From: Asias He <asias@redhat.com>
Date: Thu, 13 Jun 2013 18:30:19 +0800
Subject: [PATCH 08/40] VSOCK: Add Makefile and Kconfig
Date: Thu, 28 Jul 2016 15:36:35 +0100
Subject: [PATCH 10/46] VSOCK: Add Makefile and Kconfig
Enable virtio-vsock and vhost-vsock.
Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 304ba62fd4e670c1a5784585da0fac9f7309ef6c)
---
drivers/vhost/Kconfig | 15 +++++++++++++++
drivers/vhost/Kconfig | 14 ++++++++++++++
drivers/vhost/Makefile | 4 ++++
net/vmw_vsock/Kconfig | 19 +++++++++++++++++++
net/vmw_vsock/Makefile | 2 ++
4 files changed, 40 insertions(+)
net/vmw_vsock/Kconfig | 20 ++++++++++++++++++++
net/vmw_vsock/Makefile | 6 ++++++
4 files changed, 44 insertions(+)
diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 533eaf0..d7aae9e 100644
index 533eaf0..2b5f588 100644
--- a/drivers/vhost/Kconfig
+++ b/drivers/vhost/Kconfig
@@ -21,6 +21,21 @@ config VHOST_SCSI
@@ -21,6 +21,20 @@ config VHOST_SCSI
Say M here to enable the vhost_scsi TCM fabric module
for use with virtio-scsi guests
@ -27,7 +29,6 @@ index 533eaf0..d7aae9e 100644
+ depends on VSOCKETS && EVENTFD
+ select VIRTIO_VSOCKETS_COMMON
+ select VHOST
+ select VHOST_RING
+ default n
+ ---help---
+ This kernel module can be loaded in the host kernel to provide AF_VSOCK
@ -55,10 +56,10 @@ index e0441c3..6b012b9 100644
+
obj-$(CONFIG_VHOST) += vhost.o
diff --git a/net/vmw_vsock/Kconfig b/net/vmw_vsock/Kconfig
index 14810ab..f27e74b 100644
index 14810ab..8831e7c 100644
--- a/net/vmw_vsock/Kconfig
+++ b/net/vmw_vsock/Kconfig
@@ -26,3 +26,22 @@ config VMWARE_VMCI_VSOCKETS
@@ -26,3 +26,23 @@ config VMWARE_VMCI_VSOCKETS
To compile this driver as a module, choose M here: the module
will be called vmw_vsock_vmci_transport. If unsure, say N.
@ -73,26 +74,33 @@ index 14810ab..f27e74b 100644
+ Enable this transport if your Virtual Machine host supports Virtual
+ Sockets over virtio.
+
+ To compile this driver as a module, choose M here: the module
+ will be called virtio_vsock_transport. If unsure, say N.
+ To compile this driver as a module, choose M here: the module will be
+ called vmw_vsock_virtio_transport. If unsure, say N.
+
+config VIRTIO_VSOCKETS_COMMON
+ tristate
+ ---help---
+ This option is selected by any driver which needs to access
+ the virtio_vsock.
+ tristate
+ help
+ This option is selected by any driver which needs to access
+ the virtio_vsock. The module will be called
+ vmw_vsock_virtio_transport_common.
diff --git a/net/vmw_vsock/Makefile b/net/vmw_vsock/Makefile
index 2ce52d7..cf4c294 100644
index 2ce52d7..bc27c70 100644
--- a/net/vmw_vsock/Makefile
+++ b/net/vmw_vsock/Makefile
@@ -1,5 +1,7 @@
@@ -1,7 +1,13 @@
obj-$(CONFIG_VSOCKETS) += vsock.o
obj-$(CONFIG_VMWARE_VMCI_VSOCKETS) += vmw_vsock_vmci_transport.o
+obj-$(CONFIG_VIRTIO_VSOCKETS) += virtio_transport.o
+obj-$(CONFIG_VIRTIO_VSOCKETS_COMMON) += virtio_transport_common.o
+obj-$(CONFIG_VIRTIO_VSOCKETS) += vmw_vsock_virtio_transport.o
+obj-$(CONFIG_VIRTIO_VSOCKETS_COMMON) += vmw_vsock_virtio_transport_common.o
vsock-y += af_vsock.o vsock_addr.o
vmw_vsock_vmci_transport-y += vmci_transport.o vmci_transport_notify.o \
vmci_transport_notify_qstate.o
+
+vmw_vsock_virtio_transport-y += virtio_transport.o
+
+vmw_vsock_virtio_transport_common-y += virtio_transport_common.o
--
2.9.0
2.9.3


@ -0,0 +1,33 @@
From d722f0d8acd78f91e079f2fe0b4e4a29b42435b7 Mon Sep 17 00:00:00 2001
From: Wei Yongjun <weiyj.lk@gmail.com>
Date: Tue, 2 Aug 2016 13:50:42 +0000
Subject: [PATCH 11/46] VSOCK: Use kvfree()
Use kvfree() instead of open-coding it.
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit b226acab2f6aaa45c2af27279b63f622b23a44bd)
---
drivers/vhost/vsock.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 028ca16..0ddf3a2 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -434,10 +434,7 @@ err:
static void vhost_vsock_free(struct vhost_vsock *vsock)
{
- if (is_vmalloc_addr(vsock))
- vfree(vsock);
- else
- kfree(vsock);
+ kvfree(vsock);
}
static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
--
2.9.3


@ -0,0 +1,53 @@
From f2e2e9e439f717e454a1cf0e71e098b19ff0005d Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 4 Aug 2016 14:52:53 +0100
Subject: [PATCH 12/46] vhost/vsock: fix vhost virtio_vsock_pkt use-after-free
Stash the packet length in a local variable before handing over
ownership of the packet to virtio_transport_recv_pkt() or
virtio_transport_free_pkt().
This patch solves the use-after-free since pkt is no longer guaranteed
to be alive.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 3fda5d6e580193fa005014355b3a61498f1b3ae0)
---
drivers/vhost/vsock.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 0ddf3a2..e3b30ea 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -307,6 +307,8 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
vhost_disable_notify(&vsock->dev, vq);
for (;;) {
+ u32 len;
+
if (!vhost_vsock_more_replies(vsock)) {
/* Stop tx until the device processes already
* pending replies. Leave tx virtqueue
@@ -334,13 +336,15 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
continue;
}
+ len = pkt->len;
+
/* Only accept correctly addressed packets */
if (le64_to_cpu(pkt->hdr.src_cid) == vsock->guest_cid)
virtio_transport_recv_pkt(pkt);
else
virtio_transport_free_pkt(pkt);
- vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+ vhost_add_used(vq, head, sizeof(pkt->hdr) + len);
added = true;
}
--
2.9.3


@ -0,0 +1,28 @@
From 1a8274f5f22a79f10aae3b4cf35f9b8a65f7f423 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 5 Aug 2016 13:52:09 +0100
Subject: [PATCH 13/46] virtio-vsock: fix include guard typo
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 28ad55578b8a76390d966b09da8c7fa3644f5140)
---
include/uapi/linux/virtio_vsock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/uapi/linux/virtio_vsock.h b/include/uapi/linux/virtio_vsock.h
index 6b011c1..1d57ed3 100644
--- a/include/uapi/linux/virtio_vsock.h
+++ b/include/uapi/linux/virtio_vsock.h
@@ -32,7 +32,7 @@
*/
#ifndef _UAPI_LINUX_VIRTIO_VSOCK_H
-#define _UAPI_LINUX_VIRTIO_VOSCK_H
+#define _UAPI_LINUX_VIRTIO_VSOCK_H
#include <linux/types.h>
#include <linux/virtio_ids.h>
--
2.9.3


@ -0,0 +1,61 @@
From 5377a4a3e61adb924efdb4ff47ca2213c63e5bc4 Mon Sep 17 00:00:00 2001
From: Gerard Garcia <ggarcia@deic.uab.cat>
Date: Wed, 10 Aug 2016 17:24:34 +0200
Subject: [PATCH 14/46] vhost/vsock: drop space available check for TX vq
Remove unnecessary use of enable/disable callback notifications
and the incorrect more space available check.
The virtio_transport_tx_work handles when the TX virtqueue
has more buffers available.
Signed-off-by: Gerard Garcia <ggarcia@deic.uab.cat>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 21bc54fc0cdc31de72b57d2b3c79cf9c2b83cf39)
---
net/vmw_vsock/virtio_transport.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 699dfab..936d7ee 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -87,9 +87,6 @@ virtio_transport_send_pkt_work(struct work_struct *work)
vq = vsock->vqs[VSOCK_VQ_TX];
- /* Avoid unnecessary interrupts while we're processing the ring */
- virtqueue_disable_cb(vq);
-
for (;;) {
struct virtio_vsock_pkt *pkt;
struct scatterlist hdr, buf, *sgs[2];
@@ -99,7 +96,6 @@ virtio_transport_send_pkt_work(struct work_struct *work)
spin_lock_bh(&vsock->send_pkt_list_lock);
if (list_empty(&vsock->send_pkt_list)) {
spin_unlock_bh(&vsock->send_pkt_list_lock);
- virtqueue_enable_cb(vq);
break;
}
@@ -118,13 +114,13 @@ virtio_transport_send_pkt_work(struct work_struct *work)
}
ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, pkt, GFP_KERNEL);
+ /* Usually this means that there is no more space available in
+ * the vq
+ */
if (ret < 0) {
spin_lock_bh(&vsock->send_pkt_list_lock);
list_add(&pkt->list, &vsock->send_pkt_list);
spin_unlock_bh(&vsock->send_pkt_list_lock);
-
- if (!virtqueue_enable_cb(vq) && ret == -ENOSPC)
- continue; /* retry now that we have more space */
break;
}
--
2.9.3


@ -1,7 +1,7 @@
From a8a0423ba3b9cc33e2c673890d917318be602145 Mon Sep 17 00:00:00 2001
From fc2bc563ce2a5205b7504c39e8dbb0a5db2d63e9 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@docker.com>
Date: Mon, 4 Apr 2016 14:50:10 +0100
Subject: [PATCH 09/40] VSOCK: Only allow host network namespace to use
Subject: [PATCH 15/46] VSOCK: Only allow host network namespace to use
AF_VSOCK.
The VSOCK addressing schema does not really lend itself to simply creating an
@ -13,10 +13,10 @@ Signed-off-by: Ian Campbell <ian.campbell@docker.com>
1 file changed, 3 insertions(+)
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 15f9595..8373709 100644
index 17dbbe6..1bb1b01 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1840,6 +1840,9 @@ static const struct proto_ops vsock_stream_ops = {
@@ -1852,6 +1852,9 @@ static const struct proto_ops vsock_stream_ops = {
static int vsock_create(struct net *net, struct socket *sock,
int protocol, int kern)
{
@ -27,5 +27,5 @@ index 15f9595..8373709 100644
return -EINVAL;
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From bab5ea0c103fe9e59b91e3f02ffd3a45c4e6be4a Mon Sep 17 00:00:00 2001
From 7c37cbd1fd8bff6d8373cff4d7e1c33fcb2aa653 Mon Sep 17 00:00:00 2001
From: Jake Oshins <jakeo@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:41 -0800
Subject: [PATCH 10/40] drivers:hv: Define the channel type for Hyper-V PCI
Subject: [PATCH 16/46] drivers:hv: Define the channel type for Hyper-V PCI
Express pass-through
This defines the channel type for PCI front-ends in Hyper-V VMs.
@ -59,5 +59,5 @@ index ae6a711..10dda1e 100644
*/
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 6a0ed33229365bc267788283730cde6ddc0c7ff8 Mon Sep 17 00:00:00 2001
From b26f3791593f6645c4e0e11fd93db7e47390fab6 Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:43 -0800
Subject: [PATCH 11/40] Drivers: hv: vmbus: Use uuid_le type consistently
Subject: [PATCH 17/46] Drivers: hv: vmbus: Use uuid_le type consistently
Consistently use uuid_le type in the Hyper-V driver code.
@ -30,10 +30,10 @@ index a77646b..38470aa 100644
perf_chn = true;
break;
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index f19b6f7..e64934e 100644
index 9b5440f..9aadcc2 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -531,7 +531,7 @@ static int vmbus_uevent(struct device *device, struct kobj_uevent_env *env)
@@ -532,7 +532,7 @@ static int vmbus_uevent(struct device *device, struct kobj_uevent_env *env)
static const uuid_le null_guid;
@ -42,7 +42,7 @@ index f19b6f7..e64934e 100644
{
if (memcmp(guid, &null_guid, sizeof(uuid_le)))
return false;
@@ -544,9 +544,9 @@ static inline bool is_null_guid(const __u8 *guid)
@@ -545,9 +545,9 @@ static inline bool is_null_guid(const __u8 *guid)
*/
static const struct hv_vmbus_device_id *hv_vmbus_get_id(
const struct hv_vmbus_device_id *id,
@ -54,7 +54,7 @@ index f19b6f7..e64934e 100644
if (!memcmp(&id->guid, guid, sizeof(uuid_le)))
return id;
@@ -563,7 +563,7 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
@@ -564,7 +564,7 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
struct hv_driver *drv = drv_to_hv_drv(driver);
struct hv_device *hv_dev = device_to_hv_device(device);
@ -63,7 +63,7 @@ index f19b6f7..e64934e 100644
return 1;
return 0;
@@ -580,7 +580,7 @@ static int vmbus_probe(struct device *child_device)
@@ -581,7 +581,7 @@ static int vmbus_probe(struct device *child_device)
struct hv_device *dev = device_to_hv_device(child_device);
const struct hv_vmbus_device_id *dev_id;
@ -280,7 +280,7 @@ index 64f36e0..6e4c645 100644
};
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 5b96206..8adca44 100644
index 9f5cdd4..8e8c69b 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -917,7 +917,7 @@ static int do_vmbus_entry(const char *filename, void *symval,
@ -293,5 +293,5 @@ index 5b96206..8adca44 100644
strcpy(alias, "vmbus:");
strcat(alias, guid_name);
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 2be59f6d9924239fa410a184d06fc7d0338c83fc Mon Sep 17 00:00:00 2001
From 9ae0af317c2a085434b46197a663165895decaf6 Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:44 -0800
Subject: [PATCH 12/40] Drivers: hv: vmbus: Use uuid_le_cmp() for comparing
Subject: [PATCH 18/46] Drivers: hv: vmbus: Use uuid_le_cmp() for comparing
GUIDs
Use uuid_le_cmp() for comparing GUIDs.
@ -29,10 +29,10 @@ index 38470aa..dc4fb0b 100644
break;
}
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index e64934e..aa4d8cc 100644
index 9aadcc2..bf54455 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -533,7 +533,7 @@ static const uuid_le null_guid;
@@ -534,7 +534,7 @@ static const uuid_le null_guid;
static inline bool is_null_guid(const uuid_le *guid)
{
@ -41,7 +41,7 @@ index e64934e..aa4d8cc 100644
return false;
return true;
}
@@ -547,7 +547,7 @@ static const struct hv_vmbus_device_id *hv_vmbus_get_id(
@@ -548,7 +548,7 @@ static const struct hv_vmbus_device_id *hv_vmbus_get_id(
const uuid_le *guid)
{
for (; !is_null_guid(&id->guid); id++)
@ -51,5 +51,5 @@ index e64934e..aa4d8cc 100644
return NULL;
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 5111097f2add634ea6b2964841ad15d9149da96e Mon Sep 17 00:00:00 2001
From 5c3a0d077f4c0ecd17117c04b0b6fef7e8acbdea Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:47 -0800
Subject: [PATCH 13/40] Drivers: hv: vmbus: serialize process_chn_event() and
Subject: [PATCH 19/46] Drivers: hv: vmbus: serialize process_chn_event() and
vmbus_close_internal()
process_chn_event(), running in the tasklet, can race with
@ -83,5 +83,5 @@ index 9098f13..6a90c69 100644
}
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From df06857f7adf8fb4bb6cacdc80e174c9cb20ee8a Mon Sep 17 00:00:00 2001
From 8bba0d5b705be1f1d0bdfe7fa8465042fa936c3c Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:48 -0800
Subject: [PATCH 14/40] Drivers: hv: vmbus: do sanity check of channel state in
Subject: [PATCH 20/46] Drivers: hv: vmbus: do sanity check of channel state in
vmbus_close_internal()
This fixes an incorrect assumption of channel state in the function.
@ -38,5 +38,5 @@ index 6a90c69..b3c14ca 100644
channel->sc_creation_callback = NULL;
/* Stop callback and cancel the timer asap */
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 8f7f448b5f9b613f9dec4b4554062c6f709dce96 Mon Sep 17 00:00:00 2001
From 4185d167c4c6db493a3e96e59689e32562761563 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:49 -0800
Subject: [PATCH 15/40] Drivers: hv: vmbus: fix rescind-offer handling for
Subject: [PATCH 21/46] Drivers: hv: vmbus: fix rescind-offer handling for
device without a driver
In the path vmbus_onoffer_rescind() -> vmbus_device_unregister() ->
@ -79,10 +79,10 @@ index dc4fb0b..7903acc 100644
vmbus_device_unregister(channel->device_obj);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index aa4d8cc..5a71b2a 100644
index bf54455..8bf1f31 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -602,23 +602,11 @@ static int vmbus_remove(struct device *child_device)
@@ -603,23 +603,11 @@ static int vmbus_remove(struct device *child_device)
{
struct hv_driver *drv;
struct hv_device *dev = device_to_hv_device(child_device);
@ -106,7 +106,7 @@ index aa4d8cc..5a71b2a 100644
}
return 0;
@@ -653,7 +641,10 @@ static void vmbus_shutdown(struct device *child_device)
@@ -654,7 +642,10 @@ static void vmbus_shutdown(struct device *child_device)
static void vmbus_device_release(struct device *device)
{
struct hv_device *hv_dev = device_to_hv_device(device);
@ -118,5 +118,5 @@ index aa4d8cc..5a71b2a 100644
}
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From bf5d3511a1f0b147e32eec71f610e780fd11701a Mon Sep 17 00:00:00 2001
From 667defbc25ef5dcfa89b60c0bd534d5a27f8c116 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:50 -0800
Subject: [PATCH 16/40] Drivers: hv: vmbus: release relid on error in
Subject: [PATCH 22/46] Drivers: hv: vmbus: release relid on error in
vmbus_process_offer()
We want to simplify vmbus_onoffer_rescind() by not invoking
@ -70,5 +70,5 @@ index 7903acc..9c9da3a 100644
}
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From dfc150e7bb90be0591784b1f8a532670ee1c011a Mon Sep 17 00:00:00 2001
From 235e2935d2d0700ac21db0cb0d6a64d6f9ff09fa Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:51 -0800
Subject: [PATCH 17/40] Drivers: hv: vmbus: channge
Subject: [PATCH 23/46] Drivers: hv: vmbus: channge
vmbus_connection.channel_lock to mutex
spinlock is unnecessary here.
@ -112,5 +112,5 @@ index 3782636..d9937be 100644
struct workqueue_struct *work_queue;
};
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From c1ebc6e51a4e579e5b20ba4fd517e0da7dc563d1 Mon Sep 17 00:00:00 2001
From d6cc8615d22926bb1e2d8ba85fd391df9f1cf089 Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Mon, 14 Dec 2015 19:02:00 -0800
Subject: [PATCH 18/40] Drivers: hv: remove code duplication between
Subject: [PATCH 24/46] Drivers: hv: remove code duplication between
vmbus_recvpacket()/vmbus_recvpacket_raw()
vmbus_recvpacket() and vmbus_recvpacket_raw() are almost identical but
@ -122,5 +122,5 @@ index 2889d97..dd6de7f 100644
}
EXPORT_SYMBOL_GPL(vmbus_recvpacket_raw);
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From cc4a6558e39bfc3042357669ff74d13f55e4f856 Mon Sep 17 00:00:00 2001
From 64c112c50e6a404431f5adc7685915cd9b5e8d42 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 21 Dec 2015 12:21:22 -0800
Subject: [PATCH 19/40] Drivers: hv: vmbus: fix the building warning with
Subject: [PATCH 25/46] Drivers: hv: vmbus: fix the building warning with
hyperv-keyboard
With the recent change af3ff643ea91ba64dd8d0b1cbed54d44512f96cd
@ -68,5 +68,5 @@ index 4712d7d..9e2de6a 100644
*/
#define HV_VSS_GUID \
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From e9725d58380af9a2549b7fc76f512fc8bddea14b Mon Sep 17 00:00:00 2001
From 6ee99ea70b9975b03c436c05a55c154bed392a94 Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Tue, 15 Dec 2015 16:27:27 -0800
Subject: [PATCH 20/40] Drivers: hv: vmbus: Treat Fibre Channel devices as
Subject: [PATCH 26/46] Drivers: hv: vmbus: Treat Fibre Channel devices as
performance critical
For performance critical devices, we distribute the incoming
@ -38,5 +38,5 @@ index d013171..1c1ad47 100644
{ HV_NIC_GUID, },
/* NetworkDirect Guest RDMA */
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 396bc263df13d54d2468c5d2a7888fc9f4067894 Mon Sep 17 00:00:00 2001
From 5131dc31aecd376785b71bb3bb16bf70573682b3 Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Fri, 25 Dec 2015 20:00:30 -0800
Subject: [PATCH 21/40] Drivers: hv: vmbus: Add vendor and device atttributes
Subject: [PATCH 27/46] Drivers: hv: vmbus: Add vendor and device atttributes
Add vendor and device attributes to VMBUS devices. These will be used
by Hyper-V tools as well user-level RDMA libraries that will use the
@ -259,10 +259,10 @@ index 1c1ad47..107d72f 100644
(vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
/*
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 5a71b2a..3b83dfe 100644
index 8bf1f31..959b656 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -478,6 +478,24 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
@@ -479,6 +479,24 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
}
static DEVICE_ATTR_RO(channel_vp_mapping);
@ -287,7 +287,7 @@ index 5a71b2a..3b83dfe 100644
/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
static struct attribute *vmbus_attrs[] = {
&dev_attr_id.attr,
@@ -503,6 +521,8 @@ static struct attribute *vmbus_attrs[] = {
@@ -504,6 +522,8 @@ static struct attribute *vmbus_attrs[] = {
&dev_attr_in_read_bytes_avail.attr,
&dev_attr_in_write_bytes_avail.attr,
&dev_attr_channel_vp_mapping.attr,
@ -296,7 +296,7 @@ index 5a71b2a..3b83dfe 100644
NULL,
};
ATTRIBUTE_GROUPS(vmbus);
@@ -957,6 +977,7 @@ struct hv_device *vmbus_device_create(const uuid_le *type,
@@ -960,6 +980,7 @@ struct hv_device *vmbus_device_create(const uuid_le *type,
memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le));
memcpy(&child_device_obj->dev_instance, instance,
sizeof(uuid_le));
@ -351,5 +351,5 @@ index 9e2de6a..51c98fd 100644
struct device device;
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 864ad2a0988143fecd3b3c9282777a6930b34d5e Mon Sep 17 00:00:00 2001
From 2dd3d37c7e1a4ce1adeb2a4af3df878907d44ebe Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Wed, 27 Jan 2016 22:29:34 -0800
Subject: [PATCH 22/40] Drivers: hv: vmbus: avoid infinite loop in
Subject: [PATCH 28/46] Drivers: hv: vmbus: avoid infinite loop in
init_vp_index()
When we pick a CPU to use for a new subchannel we try find a non-used one
@ -45,5 +45,5 @@ index 107d72f..af1d82e 100644
cur_cpu = cpumask_next(cur_cpu, &available_mask);
if (cur_cpu >= nr_cpu_ids) {
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From fa2a38837ff00acfa5a1cb8fedbff33260f36653 Mon Sep 17 00:00:00 2001
From f478bef9bcd76c17c338c7c96422f22dad0d02e3 Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Wed, 27 Jan 2016 22:29:35 -0800
Subject: [PATCH 23/40] Drivers: hv: vmbus: avoid scheduling in interrupt
Subject: [PATCH 29/46] Drivers: hv: vmbus: avoid scheduling in interrupt
context in vmbus_initiate_unload()
We have to call vmbus_initiate_unload() on crash to make kdump work but
@ -95,5 +95,5 @@ index af1d82e..d6c6114 100644
/*
--
2.9.0
2.9.3

View File

@ -1,7 +1,7 @@
From d24827511ecaab87a0763e43a13e7c742338ca90 Mon Sep 17 00:00:00 2001
From 1686f700609459bc6cf9d4597a8a39bf5f133409 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:37 -0800
Subject: [PATCH 24/40] Drivers: hv: vmbus: add a helper function to set a
Subject: [PATCH 30/46] Drivers: hv: vmbus: add a helper function to set a
channel's pending send size
This will be used by the coming net/hvsock driver.
@ -32,5 +32,5 @@ index 51c98fd..934542a 100644
int vmbus_request_offers(void);
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 81caa506ed7b4af65b15c7f79e2cd82ba9565d8e Mon Sep 17 00:00:00 2001
From f22a67d17cb1f357da61258ee181729bcf0af3b1 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:38 -0800
Subject: [PATCH 25/40] Drivers: hv: vmbus: define the new offer type for
Subject: [PATCH 31/46] Drivers: hv: vmbus: define the new offer type for
Hyper-V socket (hvsock)
A helper function is also added.
@ -40,5 +40,5 @@ index 934542a..a4f105d 100644
enum hv_signal_policy policy)
{
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 460356566f2e7d6a3b8f70154d0208620ee93063 Mon Sep 17 00:00:00 2001
From 6d465cc94028d9c3b7e36996b09e7c9d95b269f3 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:39 -0800
Subject: [PATCH 26/40] Drivers: hv: vmbus: vmbus_sendpacket_ctl: hvsock: avoid
Subject: [PATCH 32/46] Drivers: hv: vmbus: vmbus_sendpacket_ctl: hvsock: avoid
unnecessary signaling
When the hvsock channel's outbound ringbuffer is full (i.e.,
@ -41,5 +41,5 @@ index dd6de7f..128dcf2 100644
return ret;
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 7b1854b482f0a1e22e3ff5dfc5111ae968354b55 Mon Sep 17 00:00:00 2001
From 7a337a1dcbdcb4c5ab22822f5cbf5b5fbe45551c Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:40 -0800
Subject: [PATCH 27/40] Drivers: hv: vmbus: define a new VMBus message type for
Subject: [PATCH 33/46] Drivers: hv: vmbus: define a new VMBus message type for
hvsock
A function to send the type of message is also added.
@ -97,5 +97,5 @@ index a4f105d..191bc5d 100644
+ const uuid_le *shv_host_servie_id);
#endif /* _HYPERV_H */
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 80bf79d36dbffe682ccc62c8a4210e3ba4a6ecd3 Mon Sep 17 00:00:00 2001
From 6a067b8e07452ec14629bea5bfe0951877c27451 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:41 -0800
Subject: [PATCH 28/40] Drivers: hv: vmbus: add a hvsock flag in struct
Subject: [PATCH 34/46] Drivers: hv: vmbus: add a hvsock flag in struct
hv_driver
Only the coming hv_sock driver has a "true" value for this flag.
@ -20,10 +20,10 @@ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 files changed, 18 insertions(+)
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 3b83dfe..d76a65f 100644
index 959b656..d46b4ff 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -583,6 +583,10 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
@@ -584,6 +584,10 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
struct hv_driver *drv = drv_to_hv_drv(driver);
struct hv_device *hv_dev = device_to_hv_device(device);
@ -60,5 +60,5 @@ index 191bc5d..05966e2 100644
uuid_le dev_type;
const struct hv_vmbus_device_id *id_table;
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 82e699723be65d45ae64999eb60bde5cbb016368 Mon Sep 17 00:00:00 2001
From e8bf64d13b450b3a224bd12779c38931e4a5691d Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:42 -0800
Subject: [PATCH 29/40] Drivers: hv: vmbus: add a per-channel rescind callback
Subject: [PATCH 35/46] Drivers: hv: vmbus: add a per-channel rescind callback
This will be used by the coming hv_sock driver.
@ -68,5 +68,5 @@ index 05966e2..ad04017 100644
* Retrieve the (sub) channel on which to send an outgoing request.
* When a primary channel has multiple sub-channels, we choose a
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From d29f80db881010c9eb7f7f09e33adcb2ec815f58 Mon Sep 17 00:00:00 2001
From b27d9192ab78946005526ee9574bc971b69205a2 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:43 -0800
Subject: [PATCH 30/40] Drivers: hv: vmbus: add an API
Subject: [PATCH 36/46] Drivers: hv: vmbus: add an API
vmbus_hvsock_device_unregister()
The hvsock driver needs this API to release all the resources related
@ -149,5 +149,5 @@ index ad04017..993318a 100644
resource_size_t min, resource_size_t max,
resource_size_t size, resource_size_t align,
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 73a45a3f46df47b046fea2fa656602a3422ba54a Mon Sep 17 00:00:00 2001
From d5f89659d3c0c5e05c3a76c3cc1b84aa505bc06e Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:45 -0800
Subject: [PATCH 31/40] Drivers: hv: vmbus: Give control over how the ring
Subject: [PATCH 37/46] Drivers: hv: vmbus: Give control over how the ring
access is serialized
On the channel send side, many of the VMBUS
@ -204,5 +204,5 @@ index 993318a..6c9695e 100644
{
return !!(c->offermsg.offer.chn_flags &
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From aa5c8776d3db2e7f09804631aec6d8e7b7265332 Mon Sep 17 00:00:00 2001
From f565fcc8decc0d57427b8506ea4b430de70440b1 Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Fri, 26 Feb 2016 15:13:16 -0800
Subject: [PATCH 32/40] Drivers: hv: vmbus: avoid wait_for_completion() on
Subject: [PATCH 38/46] Drivers: hv: vmbus: avoid wait_for_completion() on
crash
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
@ -74,10 +74,10 @@ index b925fa3..10efab0 100644
static inline void hv_poll_channel(struct vmbus_channel *channel,
void (*cb)(void *))
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index d76a65f..45ea71e 100644
index d46b4ff..f5f57ee 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -1263,7 +1263,7 @@ static void hv_kexec_handler(void)
@@ -1266,7 +1266,7 @@ static void hv_kexec_handler(void)
int cpu;
hv_synic_clockevents_cleanup();
@ -86,7 +86,7 @@ index d76a65f..45ea71e 100644
for_each_online_cpu(cpu)
smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
hv_cleanup();
@@ -1271,7 +1271,7 @@ static void hv_kexec_handler(void)
@@ -1274,7 +1274,7 @@ static void hv_kexec_handler(void)
static void hv_crash_handler(struct pt_regs *regs)
{
@ -96,5 +96,5 @@ index d76a65f..45ea71e 100644
* In crash handler we can't schedule synic cleanup for all CPUs,
* doing the cleanup for current CPU only. This should be sufficient
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From f6eda6e9caee4b2eac5c5ad7a63002b7691f2102 Mon Sep 17 00:00:00 2001
From 0b4365983ef397e8b43f9f77e591c4c9f83fca26 Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Fri, 26 Feb 2016 15:13:18 -0800
Subject: [PATCH 33/40] Drivers: hv: vmbus: avoid unneeded compiler
Subject: [PATCH 39/46] Drivers: hv: vmbus: avoid unneeded compiler
optimizations in vmbus_wait_for_unload()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
@ -35,5 +35,5 @@ index f70e352..c892db5 100644
continue;
}
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From f377090a6256e4bbe8bc26ee524a03ea198fb402 Mon Sep 17 00:00:00 2001
From f3d84d9ee57ed72603ea8334302c2ed2971882b9 Mon Sep 17 00:00:00 2001
From: Tom Herbert <tom@herbertland.com>
Date: Mon, 7 Mar 2016 14:11:06 -0800
Subject: [PATCH 34/40] kcm: Kernel Connection Multiplexor module
Subject: [PATCH 40/46] kcm: Kernel Connection Multiplexor module
This module implements the Kernel Connection Multiplexor.
@ -2308,5 +2308,5 @@ index 0000000..649d246
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_NETPROTO(PF_KCM);
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 1aaf078c41b6150c42a40ca165003c5b902ba541 Mon Sep 17 00:00:00 2001
From d436f250cb94cdc0f8ceb18c73e641f8285f2c87 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 21 Mar 2016 02:51:09 -0700
Subject: [PATCH 35/40] net: add the AF_KCM entries to family name tables
Subject: [PATCH 41/46] net: add the AF_KCM entries to family name tables
This is for the recent kcm driver, which introduces AF_KCM(41) in
b7ac4eb(kcm: Kernel Connection Multiplexor module).
@ -48,5 +48,5 @@ index 0d91f7d..925def4 100644
/*
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 46cb53195e3a65ab11c6447b855ef9684f29812b Mon Sep 17 00:00:00 2001
From 550437ba0f633b470b719d981110cbd38a4a83c4 Mon Sep 17 00:00:00 2001
From: Courtney Cavin <courtney.cavin@sonymobile.com>
Date: Wed, 27 Apr 2016 12:13:03 -0700
Subject: [PATCH 36/40] net: Add Qualcomm IPC router
Subject: [PATCH 42/46] net: Add Qualcomm IPC router
Add an implementation of Qualcomm's IPC router protocol, used to
communicate with service providing remote processors.
@ -1303,5 +1303,5 @@ index 0000000..84ebce7
+MODULE_DESCRIPTION("Qualcomm IPC-Router SMD interface driver");
+MODULE_LICENSE("GPL v2");
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 04b3d63b757432d90ce323a748b85a866114d9ac Mon Sep 17 00:00:00 2001
From 87e2463282f0dc46f866cce2efd5cb36eb964bdb Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Sun, 15 May 2016 09:53:11 -0700
Subject: [PATCH 37/40] hv_sock: introduce Hyper-V Sockets
Subject: [PATCH 43/46] hv_sock: introduce Hyper-V Sockets
Hyper-V Sockets (hv_sock) supplies a byte-stream based communication
mechanism between the host and the guest. It's somewhat like TCP over
@ -41,10 +41,10 @@ Origin: https://patchwork.ozlabs.org/patch/622404/
create mode 100644 net/hv_sock/af_hvsock.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 12d49f5..fa87bdd 100644
index fa94182..ff17e76 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5123,7 +5123,9 @@ F: drivers/input/serio/hyperv-keyboard.c
@@ -5136,7 +5136,9 @@ F: drivers/input/serio/hyperv-keyboard.c
F: drivers/net/hyperv/
F: drivers/scsi/storvsc_drv.c
F: drivers/video/fbdev/hyperv_fb.c
@ -1801,5 +1801,5 @@ index 0000000..b91bd60
+MODULE_DESCRIPTION("Hyper-V Sockets");
+MODULE_LICENSE("Dual BSD/GPL");
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 179a4a8d5ad76cab98d2bc4b0f2633898d59e9f8 Mon Sep 17 00:00:00 2001
From d3aac2768b413cea5e281290dfe236d7531638ad Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 21 Mar 2016 02:53:08 -0700
Subject: [PATCH 38/40] net: add the AF_HYPERV entries to family name tables
Subject: [PATCH 44/46] net: add the AF_HYPERV entries to family name tables
This is for the hv_sock driver, which introduces AF_HYPERV(42).
@ -45,5 +45,5 @@ index 925def4..323f7a3 100644
/*
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From 6857a15b84ea9a592cfbe8a9064d77a63d49bc68 Mon Sep 17 00:00:00 2001
From 56e767526878b2fc79b0dfaa2860ed2ae836a52e Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com>
Date: Sat, 21 May 2016 16:55:50 +0800
Subject: [PATCH 39/40] Drivers: hv: vmbus: fix the race when querying &
Subject: [PATCH 45/46] Drivers: hv: vmbus: fix the race when querying &
updating the percpu list
There is a rare race when we remove an entry from the global list
@ -129,5 +129,5 @@ index c892db5..0a54317 100644
err_free_chan:
free_channel(newchannel);
--
2.9.0
2.9.3


@ -1,7 +1,7 @@
From e3e9e646e4cd69f7dec7f5acc4a30cb1beb6e95a Mon Sep 17 00:00:00 2001
From 44694d7a14f502c9222ea100ab62fa7030acf548 Mon Sep 17 00:00:00 2001
From: Rolf Neugebauer <rolf.neugebauer@gmail.com>
Date: Mon, 23 May 2016 18:55:45 +0100
Subject: [PATCH 40/40] vmbus: Don't spam the logs with unknown GUIDs
Subject: [PATCH 46/46] vmbus: Don't spam the logs with unknown GUIDs
With Hyper-V sockets device types are introduced on the fly. The pr_info()
then prints a message on every connection, which is way too verbose. Since
@ -26,5 +26,5 @@ index 0a54317..120ee22 100644
}
--
2.9.0
2.9.3