kernel: Update vsock patches to RFC v6

Series is at <1469716595-13591-1-git-send-email-stefanha@redhat.com>.

This corresponds to v7 of the spec, posted in
<1470324277-19300-1-git-send-email-stefanha@redhat.com>

Also add a "cherry-picked from" to the "vsock: make listener child lock
ordering explicit" patch and move it to the head of the series with the other
vsock backports.

Finally backport three new upstream fixes:
3fda5d6e5801 vhost/vsock: fix vhost virtio_vsock_pkt use-after-free
28ad55578b8a virtio-vsock: fix include guard typo
21bc54fc0cdc vhost/vsock: drop space available check for TX vq

These were made on top of the version of the vsock patches that was added to
Linux master in v4.8-rc1. This commit is based on the email posting; it will
be replaced with proper cherry-picks separately.

Requires corresponding backend changes in Hyperkit.

Signed-off-by: Ian Campbell <ian.campbell@docker.com>
Ian Campbell
2016-08-11 13:14:16 +01:00
parent 8a03f15446
commit c41f680f7d
45 changed files with 1028 additions and 550 deletions


@@ -1,7 +1,7 @@
-From 4c251c111a65c8eef8e4dcf0b7326ef7761f6ab9 Mon Sep 17 00:00:00 2001
+From 0d67af6648f600656eb20cb2ca1d35cb0985e9bd Mon Sep 17 00:00:00 2001
 From: Stefan Hajnoczi <stefanha@redhat.com>
 Date: Thu, 17 Dec 2015 16:53:43 +0800
-Subject: [PATCH 01/40] virtio: make find_vqs() checkpatch.pl-friendly
+Subject: [PATCH 01/45] virtio: make find_vqs() checkpatch.pl-friendly
 
 checkpatch.pl wants arrays of strings declared as follows:
@@ -115,10 +115,10 @@ index 1b83159..bf2d130 100644
  struct virtio_ccw_device *vcdev = to_vc_device(vdev);
  unsigned long *indicatorp = NULL;
 diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
-index 7d3e5d0..0c3691f 100644
+index 56f7e25..66082c9 100644
 --- a/drivers/virtio/virtio_balloon.c
 +++ b/drivers/virtio/virtio_balloon.c
-@@ -388,7 +388,7 @@ static int init_vqs(struct virtio_balloon *vb)
+@@ -394,7 +394,7 @@ static int init_vqs(struct virtio_balloon *vb)
  {
  struct virtqueue *vqs[3];
  vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
@@ -215,5 +215,5 @@ index e5ce8ab..6e6cb0c 100644
  u64 (*get_features)(struct virtio_device *vdev);
  int (*finalize_features)(struct virtio_device *vdev);
 --
-2.9.0
+2.9.3


@@ -1,7 +1,7 @@
-From 514c41f6df542ae7b259f5d6aefea4af508fac94 Mon Sep 17 00:00:00 2001
+From f1e0be6f17679e7f532989bae3290bb4ed3ba773 Mon Sep 17 00:00:00 2001
 From: Julia Lawall <julia.lawall@lip6.fr>
 Date: Sat, 21 Nov 2015 18:39:17 +0100
-Subject: [PATCH 02/40] VSOCK: constify vmci_transport_notify_ops structures
+Subject: [PATCH 02/45] VSOCK: constify vmci_transport_notify_ops structures
 
 The vmci_transport_notify_ops structures are never modified, so declare
 them as const.
@@ -73,5 +73,5 @@ index dc9c792..21e591d 100644
  vmci_transport_notify_pkt_socket_destruct,
  vmci_transport_notify_pkt_poll_in,
 --
-2.9.0
+2.9.3


@@ -1,7 +1,7 @@
-From e91f4552f7a858fa44418e1996e21b3098683de4 Mon Sep 17 00:00:00 2001
+From bc63a861a8379269f4a51fdaac3d40f9161aea4d Mon Sep 17 00:00:00 2001
 From: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
 Date: Tue, 22 Mar 2016 17:05:52 +0100
-Subject: [PATCH 03/40] AF_VSOCK: Shrink the area influenced by prepare_to_wait
+Subject: [PATCH 03/45] AF_VSOCK: Shrink the area influenced by prepare_to_wait
 
 When a thread is prepared for waiting by calling prepare_to_wait, sleeping
 is not allowed until either the wait has taken place or finish_wait has
@@ -332,5 +332,5 @@ index 9b5bd6d..b5f1221 100644
  release_sock(sk);
  return err;
 --
-2.9.0
+2.9.3


@@ -1,7 +1,7 @@
-From 4192f672fae559f32d82de72a677701853cc98a7 Mon Sep 17 00:00:00 2001
+From c1bc13ebe28532f99cb6b8edaa57a6aa61adbe58 Mon Sep 17 00:00:00 2001
 From: Stefan Hajnoczi <stefanha@redhat.com>
 Date: Thu, 23 Jun 2016 16:28:58 +0100
-Subject: [PATCH] vsock: make listener child lock ordering explicit
+Subject: [PATCH 04/45] vsock: make listener child lock ordering explicit
 
 There are several places where the listener and pending or accept queue
 child sockets are accessed at the same time. Lockdep is unhappy that
@@ -16,12 +16,13 @@ covered the vsock_pending_work() function.
 Suggested-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
 Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
 Signed-off-by: David S. Miller <davem@davemloft.net>
+(cherry picked from commit 4192f672fae559f32d82de72a677701853cc98a7)
 ---
  net/vmw_vsock/af_vsock.c | 12 ++++++++++--
  1 file changed, 10 insertions(+), 2 deletions(-)
 diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
-index b5f1221..b96ac918 100644
+index b5f1221..b96ac91 100644
 --- a/net/vmw_vsock/af_vsock.c
 +++ b/net/vmw_vsock/af_vsock.c
 @@ -61,6 +61,14 @@
@@ -57,3 +58,6 @@ index b5f1221..b96ac918 100644
  vconnected = vsock_sk(connected);
  /* If the listener socket has received an error, then we should
+--
+2.9.3


@@ -1,7 +1,7 @@
-From c2ec5f1e2aa8784acadbaf9f625d8ca516c81c6b Mon Sep 17 00:00:00 2001
+From fdce29497e948f4a9f9417b4a908ec54feb1c9fa Mon Sep 17 00:00:00 2001
 From: Stefan Hajnoczi <stefanha@redhat.com>
-Date: Thu, 17 Dec 2015 11:10:21 +0800
+Date: Thu, 28 Jul 2016 15:36:30 +0100
-Subject: [PATCH 04/40] VSOCK: transport-specific vsock_transport functions
+Subject: [PATCH 05/45] VSOCK: transport-specific vsock_transport functions
 
 struct vsock_transport contains function pointers called by AF_VSOCK
 core code. The transport may want its own transport-specific function
@@ -13,7 +13,7 @@ access transport-specific function pointers.
 The virtio transport will use this.
 Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-(cherry picked from commit 7740f7aafc9e6f415e8b6d5e8421deae63033b8d)
+(from RFC v6 <1469716595-13591-2-git-send-email-stefanha@redhat.com>)
 ---
  include/net/af_vsock.h | 3 +++
  net/vmw_vsock/af_vsock.c | 9 +++++++++
@@ -34,10 +34,10 @@ index e9eb2d6..23f5525 100644
  void vsock_release_pending(struct sock *pending);
 diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
-index b5f1221..15f9595 100644
+index b96ac91..e34d96f 100644
 --- a/net/vmw_vsock/af_vsock.c
 +++ b/net/vmw_vsock/af_vsock.c
-@@ -1987,6 +1987,15 @@ void vsock_core_exit(void)
+@@ -1995,6 +1995,15 @@ void vsock_core_exit(void)
  }
  EXPORT_SYMBOL_GPL(vsock_core_exit);
@@ -54,5 +54,5 @@ index b5f1221..15f9595 100644
  MODULE_DESCRIPTION("VMware Virtual Socket Family");
  MODULE_VERSION("1.0.1.0-k");
 --
-2.9.0
+2.9.3


@@ -0,0 +1,82 @@
From 30d9aa8b6b1c2fa720917ede2315223ca0c5d538 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 28 Jul 2016 15:36:31 +0100
Subject: [PATCH 06/45] VSOCK: defer sock removal to transports
The virtio transport will implement graceful shutdown and the related
SO_LINGER socket option. This requires orphaning the sock but keeping
it in the table of connections after .release().
This patch adds the vsock_remove_sock() function and leaves it up to the
transport when to remove the sock.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(from RFC v6 <1469716595-13591-3-git-send-email-stefanha@redhat.com>)
---
include/net/af_vsock.h | 1 +
net/vmw_vsock/af_vsock.c | 16 ++++++++++------
net/vmw_vsock/vmci_transport.c | 2 ++
3 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index 23f5525..3af0b22 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -180,6 +180,7 @@ void vsock_remove_connected(struct vsock_sock *vsk);
struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr);
struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
struct sockaddr_vm *dst);
+void vsock_remove_sock(struct vsock_sock *vsk);
void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
#endif /* __AF_VSOCK_H__ */
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index e34d96f..17dbbe6 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -344,6 +344,16 @@ static bool vsock_in_connected_table(struct vsock_sock *vsk)
return ret;
}
+void vsock_remove_sock(struct vsock_sock *vsk)
+{
+ if (vsock_in_bound_table(vsk))
+ vsock_remove_bound(vsk);
+
+ if (vsock_in_connected_table(vsk))
+ vsock_remove_connected(vsk);
+}
+EXPORT_SYMBOL_GPL(vsock_remove_sock);
+
void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
{
int i;
@@ -660,12 +670,6 @@ static void __vsock_release(struct sock *sk)
vsk = vsock_sk(sk);
pending = NULL; /* Compiler warning. */
- if (vsock_in_bound_table(vsk))
- vsock_remove_bound(vsk);
-
- if (vsock_in_connected_table(vsk))
- vsock_remove_connected(vsk);
-
transport->release(vsk);
lock_sock(sk);
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index 0a369bb..706991e 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -1644,6 +1644,8 @@ static void vmci_transport_destruct(struct vsock_sock *vsk)
static void vmci_transport_release(struct vsock_sock *vsk)
{
+ vsock_remove_sock(vsk);
+
if (!vmci_handle_is_invalid(vmci_trans(vsk)->dg_handle)) {
vmci_datagram_destroy_handle(vmci_trans(vsk)->dg_handle);
vmci_trans(vsk)->dg_handle = VMCI_INVALID_HANDLE;
--
2.9.3


@@ -1,31 +1,35 @@
-From c3d222b1921fc5c9a6d10b2d2f2b0141fcc0741e Mon Sep 17 00:00:00 2001
+From 3c8c257cbb209af1d2a8a7bed7e4fb385e82b813 Mon Sep 17 00:00:00 2001
 From: Asias He <asias@redhat.com>
-Date: Thu, 13 Jun 2013 18:27:00 +0800
+Date: Thu, 28 Jul 2016 15:36:32 +0100
-Subject: [PATCH 05/40] VSOCK: Introduce virtio_vsock_common.ko
+Subject: [PATCH 07/45] VSOCK: Introduce virtio_vsock_common.ko
 
 This module contains the common code and header files for the following
 virtio_transporto and vhost_vsock kernel modules.
 
 Signed-off-by: Asias He <asias@redhat.com>
+Signed-off-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
 Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
+(from RFC v6 <1469716595-13591-4-git-send-email-stefanha@redhat.com>)
 ---
  MAINTAINERS | 10 +
-include/linux/virtio_vsock.h | 167 ++++
+include/linux/virtio_vsock.h | 154 ++++
-.../trace/events/vsock_virtio_transport_common.h | 144 ++++
+include/net/af_vsock.h | 2 +
+.../trace/events/vsock_virtio_transport_common.h | 144 +++
+include/uapi/linux/Kbuild | 1 +
 include/uapi/linux/virtio_ids.h | 1 +
-include/uapi/linux/virtio_vsock.h | 94 +++
+include/uapi/linux/virtio_vsock.h | 94 ++
-net/vmw_vsock/virtio_transport_common.c | 838 +++++++++++++++++++++
+net/vmw_vsock/virtio_transport_common.c | 992 +++++++++++++++++++++
-6 files changed, 1254 insertions(+)
+8 files changed, 1398 insertions(+)
 create mode 100644 include/linux/virtio_vsock.h
 create mode 100644 include/trace/events/vsock_virtio_transport_common.h
 create mode 100644 include/uapi/linux/virtio_vsock.h
 create mode 100644 net/vmw_vsock/virtio_transport_common.c
 diff --git a/MAINTAINERS b/MAINTAINERS
-index ab65bbe..b93ba8b 100644
+index 48bd523..3e60f59 100644
 --- a/MAINTAINERS
 +++ b/MAINTAINERS
-@@ -11382,6 +11382,16 @@ S: Maintained
+@@ -11395,6 +11395,16 @@ S: Maintained
 F: drivers/media/v4l2-core/videobuf2-*
 F: include/media/videobuf2-*
@@ -44,10 +48,10 @@ index ab65bbe..b93ba8b 100644
 S: Maintained
 diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
 new file mode 100644
-index 0000000..4c3d8e6
+index 0000000..9638bfe
 --- /dev/null
 +++ b/include/linux/virtio_vsock.h
-@@ -0,0 +1,167 @@
+@@ -0,0 +1,154 @@
 +#ifndef _LINUX_VIRTIO_VSOCK_H
 +#define _LINUX_VIRTIO_VSOCK_H
 +
@@ -62,8 +66,6 @@ index 0000000..4c3d8e6
 +#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 4)
 +#define VIRTIO_VSOCK_MAX_BUF_SIZE 0xFFFFFFFFUL
 +#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 64)
-+#define VIRTIO_VSOCK_MAX_TX_BUF_SIZE (1024 * 1024 * 16)
-+#define VIRTIO_VSOCK_MAX_DGRAM_SIZE (1024 * 64)
 +
 +enum {
 + VSOCK_VQ_RX = 0, /* for host to guest data */
@@ -81,8 +83,8 @@ index 0000000..4c3d8e6
 + u32 buf_size_min;
 + u32 buf_size_max;
 +
-+ struct mutex tx_lock;
-+ struct mutex rx_lock;
++ spinlock_t tx_lock;
++ spinlock_t rx_lock;
 +
 + /* Protected by tx_lock */
 + u32 tx_cnt;
@@ -103,6 +105,7 @@ index 0000000..4c3d8e6
 + void *buf;
 + u32 len;
 + u32 off;
++ bool reply;
 +};
 +
 +struct virtio_vsock_pkt_info {
@@ -112,29 +115,17 @@ index 0000000..4c3d8e6
 + u16 type;
 + u16 op;
 + u32 flags;
++ bool reply;
 +};
 +
 +struct virtio_transport {
 + /* This must be the first field */
 + struct vsock_transport transport;
 +
-+ /* Send packet for a specific socket */
-+ int (*send_pkt)(struct vsock_sock *vsk,
-+ struct virtio_vsock_pkt_info *info);
-+
-+ /* Send packet without a socket (e.g. RST). Prefer send_pkt() over
-+ * send_pkt_no_sock() when a socket exists.
-+ */
-+ int (*send_pkt_no_sock)(struct virtio_vsock_pkt *pkt);
++ /* Takes ownership of the packet */
++ int (*send_pkt)(struct virtio_vsock_pkt *pkt);
 +};
 +
-+struct virtio_vsock_pkt *
-+virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
-+ size_t len,
-+ u32 src_cid,
-+ u32 src_port,
-+ u32 dst_cid,
-+ u32 dst_port);
 +ssize_t
 +virtio_transport_stream_dequeue(struct vsock_sock *vsk,
 + struct msghdr *msg,
@@ -215,6 +206,19 @@ index 0000000..4c3d8e6
 +void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
 +
 +#endif /* _LINUX_VIRTIO_VSOCK_H */
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index 3af0b22..f275896 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -63,6 +63,8 @@ struct vsock_sock {
struct list_head accept_queue;
bool rejected;
struct delayed_work dwork;
+ struct delayed_work close_work;
+ bool close_work_scheduled;
u32 peer_shutdown;
bool sent_request;
bool ignore_connecting_rst;
 diff --git a/include/trace/events/vsock_virtio_transport_common.h b/include/trace/events/vsock_virtio_transport_common.h
 new file mode 100644
 index 0000000..b7f1d62
@@ -365,6 +369,18 @@ index 0000000..b7f1d62
 +
 +/* This part must be outside protection */
 +#include <trace/define_trace.h>
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 32152e7..c830e9f 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -448,6 +448,7 @@ header-y += virtio_ring.h
header-y += virtio_rng.h
header-y += virtio_scsi.h
header-y += virtio_types.h
+header-y += virtio_vsock.h
header-y += vm_sockets.h
header-y += vt.h
header-y += wait.h
 diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
 index 77925f5..3228d58 100644
 --- a/include/uapi/linux/virtio_ids.h
@@ -378,7 +394,7 @@ index 77925f5..3228d58 100644
 #endif /* _LINUX_VIRTIO_IDS_H */
 diff --git a/include/uapi/linux/virtio_vsock.h b/include/uapi/linux/virtio_vsock.h
 new file mode 100644
-index 0000000..12946ab
+index 0000000..6b011c1
 --- /dev/null
 +++ b/include/uapi/linux/virtio_vsock.h
 @@ -0,0 +1,94 @@
@@ -423,8 +439,8 @@ index 0000000..12946ab
 +#include <linux/virtio_config.h>
 +
 +struct virtio_vsock_config {
-+ __le32 guest_cid;
-+};
++ __le64 guest_cid;
++} __attribute__((packed));
 +
 +enum virtio_vsock_event_id {
 + VIRTIO_VSOCK_EVENT_TRANSPORT_RESET = 0,
@@ -432,12 +448,12 @@ index 0000000..12946ab
 +
 +struct virtio_vsock_event {
 + __le32 id;
-+};
++} __attribute__((packed));
 +
 +struct virtio_vsock_hdr {
-+ __le32 src_cid;
++ __le64 src_cid;
++ __le64 dst_cid;
 + __le32 src_port;
-+ __le32 dst_cid;
 + __le32 dst_port;
 + __le32 len;
 + __le16 type; /* enum virtio_vsock_type */
@@ -445,7 +461,7 @@ index 0000000..12946ab
 + __le32 flags;
 + __le32 buf_alloc;
 + __le32 fwd_cnt;
-+};
++} __attribute__((packed));
 +
 +enum virtio_vsock_type {
 + VIRTIO_VSOCK_TYPE_STREAM = 1,
@@ -478,10 +494,10 @@ index 0000000..12946ab
 +#endif /* _UAPI_LINUX_VIRTIO_VSOCK_H */
 diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
 new file mode 100644
-index 0000000..5b9e202
+index 0000000..a53b3a1
 --- /dev/null
 +++ b/net/vmw_vsock/virtio_transport_common.c
-@@ -0,0 +1,838 @@
+@@ -0,0 +1,992 @@
 +/*
 + * common code for virtio vsock
 + *
@@ -491,6 +507,7 @@ index 0000000..5b9e202
 + *
 + * This work is licensed under the terms of the GNU GPL, version 2.
 + */
++#include <linux/spinlock.h>
 +#include <linux/module.h>
 +#include <linux/ctype.h>
 +#include <linux/list.h>
@@ -505,6 +522,9 @@ index 0000000..5b9e202
 +#define CREATE_TRACE_POINTS
 +#include <trace/events/vsock_virtio_transport_common.h>
 +
++/* How long to wait for graceful shutdown of a connection */
++#define VSOCK_CLOSE_TIMEOUT (8 * HZ)
++
 +static const struct virtio_transport *virtio_transport_get_ops(void)
 +{
 + const struct vsock_transport *t = vsock_core_get_transport();
@@ -512,17 +532,6 @@ index 0000000..5b9e202
 + return container_of(t, struct virtio_transport, transport);
 +}
 +
-+static int virtio_transport_send_pkt(struct vsock_sock *vsk,
-+ struct virtio_vsock_pkt_info *info)
-+{
-+ return virtio_transport_get_ops()->send_pkt(vsk, info);
-+}
-+
-+static int virtio_transport_send_pkt_no_sock(struct virtio_vsock_pkt *pkt)
-+{
-+ return virtio_transport_get_ops()->send_pkt_no_sock(pkt);
-+}
-+
 +struct virtio_vsock_pkt *
 +virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
 + size_t len,
@@ -540,13 +549,14 @@ index 0000000..5b9e202
 +
 + pkt->hdr.type = cpu_to_le16(info->type);
 + pkt->hdr.op = cpu_to_le16(info->op);
-+ pkt->hdr.src_cid = cpu_to_le32(src_cid);
++ pkt->hdr.src_cid = cpu_to_le64(src_cid);
++ pkt->hdr.dst_cid = cpu_to_le64(dst_cid);
 + pkt->hdr.src_port = cpu_to_le32(src_port);
-+ pkt->hdr.dst_cid = cpu_to_le32(dst_cid);
 + pkt->hdr.dst_port = cpu_to_le32(dst_port);
 + pkt->hdr.flags = cpu_to_le32(info->flags);
 + pkt->len = len;
 + pkt->hdr.len = cpu_to_le32(len);
++ pkt->reply = info->reply;
 +
 + if (info->msg && len > 0) {
 + pkt->buf = kmalloc(len, GFP_KERNEL);
@@ -574,6 +584,50 @@ index 0000000..5b9e202
 +}
 +EXPORT_SYMBOL_GPL(virtio_transport_alloc_pkt);
 +
+static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt_info *info)
+{
+ u32 src_cid, src_port, dst_cid, dst_port;
+ struct virtio_vsock_sock *vvs;
+ struct virtio_vsock_pkt *pkt;
+ u32 pkt_len = info->pkt_len;
+
+ src_cid = vm_sockets_get_local_cid();
+ src_port = vsk->local_addr.svm_port;
+ if (!info->remote_cid) {
+ dst_cid = vsk->remote_addr.svm_cid;
+ dst_port = vsk->remote_addr.svm_port;
+ } else {
+ dst_cid = info->remote_cid;
+ dst_port = info->remote_port;
+ }
+
+ vvs = vsk->trans;
+
+ /* we can send less than pkt_len bytes */
+ if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
+ pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+
+ /* virtio_transport_get_credit might return less than pkt_len credit */
+ pkt_len = virtio_transport_get_credit(vvs, pkt_len);
+
+ /* Do not send zero length OP_RW pkt */
+ if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
+ return pkt_len;
+
+ pkt = virtio_transport_alloc_pkt(info, pkt_len,
+ src_cid, src_port,
+ dst_cid, dst_port);
+ if (!pkt) {
+ virtio_transport_put_credit(vvs, pkt_len);
+ return -ENOMEM;
+ }
+
+ virtio_transport_inc_tx_pkt(vvs, pkt);
+
+ return virtio_transport_get_ops()->send_pkt(pkt);
+}
+
 +static void virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
 + struct virtio_vsock_pkt *pkt)
 +{
@@ -589,10 +643,10 @@ index 0000000..5b9e202
 +
 +void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs, struct virtio_vsock_pkt *pkt)
 +{
-+ mutex_lock(&vvs->tx_lock);
++ spin_lock_bh(&vvs->tx_lock);
 + pkt->hdr.fwd_cnt = cpu_to_le32(vvs->fwd_cnt);
 + pkt->hdr.buf_alloc = cpu_to_le32(vvs->buf_alloc);
-+ mutex_unlock(&vvs->tx_lock);
++ spin_unlock_bh(&vvs->tx_lock);
 +}
 +EXPORT_SYMBOL_GPL(virtio_transport_inc_tx_pkt);
 +
@@ -600,12 +654,12 @@ index 0000000..5b9e202
 +{
 + u32 ret;
 +
-+ mutex_lock(&vvs->tx_lock);
++ spin_lock_bh(&vvs->tx_lock);
 + ret = vvs->peer_buf_alloc - (vvs->tx_cnt - vvs->peer_fwd_cnt);
 + if (ret > credit)
 + ret = credit;
 + vvs->tx_cnt += ret;
-+ mutex_unlock(&vvs->tx_lock);
++ spin_unlock_bh(&vvs->tx_lock);
 +
 + return ret;
 +}
@@ -613,9 +667,9 @@ index 0000000..5b9e202
 +
 +void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit)
 +{
-+ mutex_lock(&vvs->tx_lock);
++ spin_lock_bh(&vvs->tx_lock);
 + vvs->tx_cnt -= credit;
-+ mutex_unlock(&vvs->tx_lock);
++ spin_unlock_bh(&vvs->tx_lock);
 +}
 +EXPORT_SYMBOL_GPL(virtio_transport_put_credit);
 +
@@ -628,7 +682,7 @@ index 0000000..5b9e202
 + .type = type,
 + };
 +
-+ return virtio_transport_send_pkt(vsk, &info);
++ return virtio_transport_send_pkt_info(vsk, &info);
 +}
 +
 +static ssize_t
@@ -641,10 +695,8 @@ index 0000000..5b9e202
 + size_t bytes, total = 0;
 + int err = -EFAULT;
 +
-+ mutex_lock(&vvs->rx_lock);
-+ while (total < len &&
-+ vvs->rx_bytes > 0 &&
-+ !list_empty(&vvs->rx_queue)) {
++ spin_lock_bh(&vvs->rx_lock);
++ while (total < len && !list_empty(&vvs->rx_queue)) {
 + pkt = list_first_entry(&vvs->rx_queue,
 + struct virtio_vsock_pkt, list);
 +
@@ -652,9 +704,17 @@ index 0000000..5b9e202
 + if (bytes > pkt->len - pkt->off)
 + bytes = pkt->len - pkt->off;
 +
++ /* sk_lock is held by caller so no one else can dequeue.
++ * Unlock rx_lock since memcpy_to_msg() may sleep.
++ */
++ spin_unlock_bh(&vvs->rx_lock);
++
 + err = memcpy_to_msg(msg, pkt->buf + pkt->off, bytes);
 + if (err)
 + goto out;
++
++ spin_lock_bh(&vvs->rx_lock);
++
 + total += bytes;
 + pkt->off += bytes;
 + if (pkt->off == pkt->len) {
@@ -663,7 +723,7 @@ index 0000000..5b9e202
 + virtio_transport_free_pkt(pkt);
 + }
 + }
-+ mutex_unlock(&vvs->rx_lock);
++ spin_unlock_bh(&vvs->rx_lock);
 +
 + /* Send a credit pkt to peer */
 + virtio_transport_send_credit_update(vsk, VIRTIO_VSOCK_TYPE_STREAM,
@@ -672,7 +732,6 @@ index 0000000..5b9e202
 + return total;
 +
 +out:
-+ mutex_unlock(&vvs->rx_lock);
 + if (total)
 + err = total;
 + return err;
@@ -704,9 +763,9 @@ index 0000000..5b9e202
 + struct virtio_vsock_sock *vvs = vsk->trans;
 + s64 bytes;
 +
-+ mutex_lock(&vvs->rx_lock);
++ spin_lock_bh(&vvs->rx_lock);
 + bytes = vvs->rx_bytes;
-+ mutex_unlock(&vvs->rx_lock);
++ spin_unlock_bh(&vvs->rx_lock);
 +
 + return bytes;
 +}
@@ -729,9 +788,9 @@ index 0000000..5b9e202
 + struct virtio_vsock_sock *vvs = vsk->trans;
 + s64 bytes;
 +
-+ mutex_lock(&vvs->tx_lock);
++ spin_lock_bh(&vvs->tx_lock);
 + bytes = virtio_transport_has_space(vsk);
-+ mutex_unlock(&vvs->tx_lock);
++ spin_unlock_bh(&vvs->tx_lock);
 +
 + return bytes;
 +}
@@ -763,8 +822,8 @@ index 0000000..5b9e202
 +
 + vvs->buf_alloc = vvs->buf_size;
 +
-+ mutex_init(&vvs->rx_lock);
-+ mutex_init(&vvs->tx_lock);
++ spin_lock_init(&vvs->rx_lock);
++ spin_lock_init(&vvs->tx_lock);
 + INIT_LIST_HEAD(&vvs->rx_queue);
 +
 + return 0;
@@ -962,7 +1021,7 @@ index 0000000..5b9e202
 + .type = VIRTIO_VSOCK_TYPE_STREAM,
 + };
 +
-+ return virtio_transport_send_pkt(vsk, &info);
++ return virtio_transport_send_pkt_info(vsk, &info);
 +}
 +EXPORT_SYMBOL_GPL(virtio_transport_connect);
 +
@@ -977,22 +1036,10 @@ index 0000000..5b9e202
 + VIRTIO_VSOCK_SHUTDOWN_SEND : 0),
 + };
 +
-+ return virtio_transport_send_pkt(vsk, &info);
++ return virtio_transport_send_pkt_info(vsk, &info);
 +}
 +EXPORT_SYMBOL_GPL(virtio_transport_shutdown);
 +
-+void virtio_transport_release(struct vsock_sock *vsk)
-+{
-+ struct sock *sk = &vsk->sk;
-+
-+ /* Tell other side to terminate connection */
-+ if (sk->sk_type == SOCK_STREAM &&
-+ vsk->peer_shutdown != SHUTDOWN_MASK &&
-+ sk->sk_state == SS_CONNECTED)
-+ (void)virtio_transport_shutdown(vsk, SHUTDOWN_MASK);
-+}
-+EXPORT_SYMBOL_GPL(virtio_transport_release);
-+
 +int
 +virtio_transport_dgram_enqueue(struct vsock_sock *vsk,
 + struct sockaddr_vm *remote_addr,
@@ -1015,7 +1062,7 @@ index 0000000..5b9e202
 + .pkt_len = len,
 + };
 +
-+ return virtio_transport_send_pkt(vsk, &info);
++ return virtio_transport_send_pkt_info(vsk, &info);
 +}
 +EXPORT_SYMBOL_GPL(virtio_transport_stream_enqueue);
 +
@@ -1027,29 +1074,31 @@ index 0000000..5b9e202
 +}
 +EXPORT_SYMBOL_GPL(virtio_transport_destruct);
 +
-+static int virtio_transport_send_reset(struct vsock_sock *vsk,
++static int virtio_transport_reset(struct vsock_sock *vsk,
 + struct virtio_vsock_pkt *pkt)
 +{
 + struct virtio_vsock_pkt_info info = {
 + .op = VIRTIO_VSOCK_OP_RST,
 + .type = VIRTIO_VSOCK_TYPE_STREAM,
++ .reply = !!pkt,
 + };
 +
 + /* Send RST only if the original pkt is not a RST pkt */
-+ if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
++ if (pkt && le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
 + return 0;
 +
-+ return virtio_transport_send_pkt(vsk, &info);
++ return virtio_transport_send_pkt_info(vsk, &info);
 +}
 +
 +/* Normally packets are associated with a socket. There may be no socket if an
 + * attempt was made to connect to a socket that does not exist.
 + */
-+static int virtio_transport_send_reset_no_sock(struct virtio_vsock_pkt *pkt)
++static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
 +{
 + struct virtio_vsock_pkt_info info = {
 + .op = VIRTIO_VSOCK_OP_RST,
 + .type = le16_to_cpu(pkt->hdr.type),
++ .reply = true,
 + };
 +
 + /* Send RST only if the original pkt is not a RST pkt */
@@ -1064,9 +1113,117 @@ index 0000000..5b9e202
 + if (!pkt)
 + return -ENOMEM;
 +
-+ return virtio_transport_send_pkt_no_sock(pkt);
++ return virtio_transport_get_ops()->send_pkt(pkt);
 +}
 +
+static void virtio_transport_wait_close(struct sock *sk, long timeout)
+{
+ if (timeout) {
+ DEFINE_WAIT(wait);
+
+ do {
+ prepare_to_wait(sk_sleep(sk), &wait,
+ TASK_INTERRUPTIBLE);
+ if (sk_wait_event(sk, &timeout,
+ sock_flag(sk, SOCK_DONE)))
+ break;
+ } while (!signal_pending(current) && timeout);
+
+ finish_wait(sk_sleep(sk), &wait);
+ }
+}
+
+static void virtio_transport_do_close(struct vsock_sock *vsk,
+ bool cancel_timeout)
+{
+ struct sock *sk = sk_vsock(vsk);
+
+ sock_set_flag(sk, SOCK_DONE);
+ vsk->peer_shutdown = SHUTDOWN_MASK;
+ if (vsock_stream_has_data(vsk) <= 0)
+ sk->sk_state = SS_DISCONNECTING;
+ sk->sk_state_change(sk);
+
+ if (vsk->close_work_scheduled &&
+ (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
+ vsk->close_work_scheduled = false;
+
+ vsock_remove_sock(vsk);
+
+ /* Release refcnt obtained when we scheduled the timeout */
+ sock_put(sk);
+ }
+}
+
+static void virtio_transport_close_timeout(struct work_struct *work)
+{
+ struct vsock_sock *vsk =
+ container_of(work, struct vsock_sock, close_work.work);
+ struct sock *sk = sk_vsock(vsk);
+
+ sock_hold(sk);
+ lock_sock(sk);
+
+ if (!sock_flag(sk, SOCK_DONE)) {
+ (void)virtio_transport_reset(vsk, NULL);
+
+ virtio_transport_do_close(vsk, false);
+ }
+
+ vsk->close_work_scheduled = false;
+
+ release_sock(sk);
+ sock_put(sk);
+}
+
+/* User context, vsk->sk is locked */
+static bool virtio_transport_close(struct vsock_sock *vsk)
+{
+ struct sock *sk = &vsk->sk;
+
+ if (!(sk->sk_state == SS_CONNECTED ||
+ sk->sk_state == SS_DISCONNECTING))
+ return true;
+
+ /* Already received SHUTDOWN from peer, reply with RST */
+ if ((vsk->peer_shutdown & SHUTDOWN_MASK) == SHUTDOWN_MASK) {
+ (void)virtio_transport_reset(vsk, NULL);
+ return true;
+ }
+
+ if ((sk->sk_shutdown & SHUTDOWN_MASK) != SHUTDOWN_MASK)
+ (void)virtio_transport_shutdown(vsk, SHUTDOWN_MASK);
+
+ if (sock_flag(sk, SOCK_LINGER) && !(current->flags & PF_EXITING))
+ virtio_transport_wait_close(sk, sk->sk_lingertime);
+
+ if (sock_flag(sk, SOCK_DONE)) {
+ return true;
+ }
+
+ sock_hold(sk);
+ INIT_DELAYED_WORK(&vsk->close_work,
+ virtio_transport_close_timeout);
+ vsk->close_work_scheduled = true;
+ schedule_delayed_work(&vsk->close_work, VSOCK_CLOSE_TIMEOUT);
+ return false;
+}
+
+void virtio_transport_release(struct vsock_sock *vsk)
+{
+ struct sock *sk = &vsk->sk;
+ bool remove_sock = true;
+
+ lock_sock(sk);
+ if (sk->sk_type == SOCK_STREAM)
+ remove_sock = virtio_transport_close(vsk);
+ release_sock(sk);
+
+ if (remove_sock)
+ vsock_remove_sock(vsk);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_release);
+
+static int +static int
+virtio_transport_recv_connecting(struct sock *sk, +virtio_transport_recv_connecting(struct sock *sk,
+ struct virtio_vsock_pkt *pkt) + struct virtio_vsock_pkt *pkt)
@@ -1096,7 +1253,7 @@ index 0000000..5b9e202
+ return 0; + return 0;
+ +
+destroy: +destroy:
+ virtio_transport_send_reset(vsk, pkt); + virtio_transport_reset(vsk, pkt);
+ sk->sk_state = SS_UNCONNECTED; + sk->sk_state = SS_UNCONNECTED;
+ sk->sk_err = skerr; + sk->sk_err = skerr;
+ sk->sk_error_report(sk); + sk->sk_error_report(sk);
@@ -1116,10 +1273,10 @@ index 0000000..5b9e202
+ pkt->len = le32_to_cpu(pkt->hdr.len); + pkt->len = le32_to_cpu(pkt->hdr.len);
+ pkt->off = 0; + pkt->off = 0;
+ +
+ mutex_lock(&vvs->rx_lock); + spin_lock_bh(&vvs->rx_lock);
+ virtio_transport_inc_rx_pkt(vvs, pkt); + virtio_transport_inc_rx_pkt(vvs, pkt);
+ list_add_tail(&pkt->list, &vvs->rx_queue); + list_add_tail(&pkt->list, &vvs->rx_queue);
+ mutex_unlock(&vvs->rx_lock); + spin_unlock_bh(&vvs->rx_lock);
+ +
+ sk->sk_data_ready(sk); + sk->sk_data_ready(sk);
+ return err; + return err;
@@ -1138,11 +1295,7 @@ index 0000000..5b9e202
+ sk->sk_state_change(sk); + sk->sk_state_change(sk);
+ break; + break;
+ case VIRTIO_VSOCK_OP_RST: + case VIRTIO_VSOCK_OP_RST:
+ sock_set_flag(sk, SOCK_DONE); + virtio_transport_do_close(vsk, true);
+ vsk->peer_shutdown = SHUTDOWN_MASK;
+ if (vsock_stream_has_data(vsk) <= 0)
+ sk->sk_state = SS_DISCONNECTING;
+ sk->sk_state_change(sk);
+ break; + break;
+ default: + default:
+ err = -EINVAL; + err = -EINVAL;
@@ -1153,6 +1306,16 @@ index 0000000..5b9e202
+ return err; + return err;
+} +}
+ +
+static void
+virtio_transport_recv_disconnecting(struct sock *sk,
+ struct virtio_vsock_pkt *pkt)
+{
+ struct vsock_sock *vsk = vsock_sk(sk);
+
+ if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
+ virtio_transport_do_close(vsk, true);
+}
+
+static int +static int
+virtio_transport_send_response(struct vsock_sock *vsk, +virtio_transport_send_response(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt *pkt) + struct virtio_vsock_pkt *pkt)
@@ -1162,9 +1325,10 @@ index 0000000..5b9e202
+ .type = VIRTIO_VSOCK_TYPE_STREAM, + .type = VIRTIO_VSOCK_TYPE_STREAM,
+ .remote_cid = le32_to_cpu(pkt->hdr.src_cid), + .remote_cid = le32_to_cpu(pkt->hdr.src_cid),
+ .remote_port = le32_to_cpu(pkt->hdr.src_port), + .remote_port = le32_to_cpu(pkt->hdr.src_port),
+ .reply = true,
+ }; + };
+ +
+ return virtio_transport_send_pkt(vsk, &info); + return virtio_transport_send_pkt_info(vsk, &info);
+} +}
+ +
+/* Handle server socket */ +/* Handle server socket */
@@ -1176,25 +1340,25 @@ index 0000000..5b9e202
+ struct sock *child; + struct sock *child;
+ +
+ if (le16_to_cpu(pkt->hdr.op) != VIRTIO_VSOCK_OP_REQUEST) { + if (le16_to_cpu(pkt->hdr.op) != VIRTIO_VSOCK_OP_REQUEST) {
+ virtio_transport_send_reset(vsk, pkt); + virtio_transport_reset(vsk, pkt);
+ return -EINVAL; + return -EINVAL;
+ } + }
+ +
+ if (sk_acceptq_is_full(sk)) { + if (sk_acceptq_is_full(sk)) {
+ virtio_transport_send_reset(vsk, pkt); + virtio_transport_reset(vsk, pkt);
+ return -ENOMEM; + return -ENOMEM;
+ } + }
+ +
+ child = __vsock_create(sock_net(sk), NULL, sk, GFP_KERNEL, + child = __vsock_create(sock_net(sk), NULL, sk, GFP_KERNEL,
+ sk->sk_type, 0); + sk->sk_type, 0);
+ if (!child) { + if (!child) {
+ virtio_transport_send_reset(vsk, pkt); + virtio_transport_reset(vsk, pkt);
+ return -ENOMEM; + return -ENOMEM;
+ } + }
+ +
+ sk->sk_ack_backlog++; + sk->sk_ack_backlog++;
+ +
+ lock_sock(child); + lock_sock_nested(child, SINGLE_DEPTH_NESTING);
+ +
+ child->sk_state = SS_CONNECTED; + child->sk_state = SS_CONNECTED;
+ +
@@ -1214,7 +1378,7 @@ index 0000000..5b9e202
+ return 0; + return 0;
+} +}
+ +
+static void virtio_transport_space_update(struct sock *sk, +static bool virtio_transport_space_update(struct sock *sk,
+ struct virtio_vsock_pkt *pkt) + struct virtio_vsock_pkt *pkt)
+{ +{
+ struct vsock_sock *vsk = vsock_sk(sk); + struct vsock_sock *vsk = vsock_sk(sk);
@@ -1222,14 +1386,12 @@ index 0000000..5b9e202
+ bool space_available; + bool space_available;
+ +
+ /* buf_alloc and fwd_cnt is always included in the hdr */ + /* buf_alloc and fwd_cnt is always included in the hdr */
+ mutex_lock(&vvs->tx_lock); + spin_lock_bh(&vvs->tx_lock);
+ vvs->peer_buf_alloc = le32_to_cpu(pkt->hdr.buf_alloc); + vvs->peer_buf_alloc = le32_to_cpu(pkt->hdr.buf_alloc);
+ vvs->peer_fwd_cnt = le32_to_cpu(pkt->hdr.fwd_cnt); + vvs->peer_fwd_cnt = le32_to_cpu(pkt->hdr.fwd_cnt);
+ space_available = virtio_transport_has_space(vsk); + space_available = virtio_transport_has_space(vsk);
+ mutex_unlock(&vvs->tx_lock); + spin_unlock_bh(&vvs->tx_lock);
+ + return space_available;
+ if (space_available)
+ sk->sk_write_space(sk);
+} +}
+ +
+/* We are under the virtio-vsock's vsock->rx_lock or vhost-vsock's vq->mutex +/* We are under the virtio-vsock's vsock->rx_lock or vhost-vsock's vq->mutex
@@ -1240,6 +1402,7 @@ index 0000000..5b9e202
+ struct sockaddr_vm src, dst; + struct sockaddr_vm src, dst;
+ struct vsock_sock *vsk; + struct vsock_sock *vsk;
+ struct sock *sk; + struct sock *sk;
+ bool space_available;
+ +
+ vsock_addr_init(&src, le32_to_cpu(pkt->hdr.src_cid), + vsock_addr_init(&src, le32_to_cpu(pkt->hdr.src_cid),
+ le32_to_cpu(pkt->hdr.src_port)); + le32_to_cpu(pkt->hdr.src_port));
@@ -1256,7 +1419,7 @@ index 0000000..5b9e202
+ le32_to_cpu(pkt->hdr.fwd_cnt)); + le32_to_cpu(pkt->hdr.fwd_cnt));
+ +
+ if (le16_to_cpu(pkt->hdr.type) != VIRTIO_VSOCK_TYPE_STREAM) { + if (le16_to_cpu(pkt->hdr.type) != VIRTIO_VSOCK_TYPE_STREAM) {
+ (void)virtio_transport_send_reset_no_sock(pkt); + (void)virtio_transport_reset_no_sock(pkt);
+ goto free_pkt; + goto free_pkt;
+ } + }
+ +
@@ -1267,20 +1430,23 @@ index 0000000..5b9e202
+ if (!sk) { + if (!sk) {
+ sk = vsock_find_bound_socket(&dst); + sk = vsock_find_bound_socket(&dst);
+ if (!sk) { + if (!sk) {
+ (void)virtio_transport_send_reset_no_sock(pkt); + (void)virtio_transport_reset_no_sock(pkt);
+ goto free_pkt; + goto free_pkt;
+ } + }
+ } + }
+ +
+ vsk = vsock_sk(sk); + vsk = vsock_sk(sk);
+ +
+ virtio_transport_space_update(sk, pkt); + space_available = virtio_transport_space_update(sk, pkt);
+ +
+ lock_sock(sk); + lock_sock(sk);
+ +
+ /* Update CID in case it has changed after a transport reset event */ + /* Update CID in case it has changed after a transport reset event */
+ vsk->local_addr.svm_cid = dst.svm_cid; + vsk->local_addr.svm_cid = dst.svm_cid;
+ +
+ if (space_available)
+ sk->sk_write_space(sk);
+
+ switch (sk->sk_state) { + switch (sk->sk_state) {
+ case VSOCK_SS_LISTEN: + case VSOCK_SS_LISTEN:
+ virtio_transport_recv_listen(sk, pkt); + virtio_transport_recv_listen(sk, pkt);
@@ -1293,6 +1459,10 @@ index 0000000..5b9e202
+ case SS_CONNECTED: + case SS_CONNECTED:
+ virtio_transport_recv_connected(sk, pkt); + virtio_transport_recv_connected(sk, pkt);
+ break; + break;
+ case SS_DISCONNECTING:
+ virtio_transport_recv_disconnecting(sk, pkt);
+ virtio_transport_free_pkt(pkt);
+ break;
+ default: + default:
+ virtio_transport_free_pkt(pkt); + virtio_transport_free_pkt(pkt);
+ break; + break;
@@ -1321,5 +1491,5 @@ index 0000000..5b9e202
+MODULE_AUTHOR("Asias He"); +MODULE_AUTHOR("Asias He");
+MODULE_DESCRIPTION("common code for virtio vsock"); +MODULE_DESCRIPTION("common code for virtio vsock");
-- --
2.9.0 2.9.3
View File
@@ -1,24 +1,25 @@
From 425faa8655fbbe9191ddc88fe57097e8be2fdf44 Mon Sep 17 00:00:00 2001 From c6a12128d92b89b3d424e7cd3dd3c6cbfe4e1011 Mon Sep 17 00:00:00 2001
From: Asias He <asias@redhat.com> From: Asias He <asias@redhat.com>
Date: Thu, 13 Jun 2013 18:28:48 +0800 Date: Thu, 28 Jul 2016 15:36:33 +0100
Subject: [PATCH 06/40] VSOCK: Introduce virtio_transport.ko Subject: [PATCH 08/45] VSOCK: Introduce virtio_transport.ko
VM sockets virtio transport implementation. This driver runs in the VM sockets virtio transport implementation. This driver runs in the
guest. guest.
Signed-off-by: Asias He <asias@redhat.com> Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(from RFC v6 <1469716595-13591-5-git-send-email-stefanha@redhat.com>)
--- ---
MAINTAINERS | 1 + MAINTAINERS | 1 +
net/vmw_vsock/virtio_transport.c | 584 +++++++++++++++++++++++++++++++++++++++ net/vmw_vsock/virtio_transport.c | 624 +++++++++++++++++++++++++++++++++++++++
2 files changed, 585 insertions(+) 2 files changed, 625 insertions(+)
create mode 100644 net/vmw_vsock/virtio_transport.c create mode 100644 net/vmw_vsock/virtio_transport.c
diff --git a/MAINTAINERS b/MAINTAINERS diff --git a/MAINTAINERS b/MAINTAINERS
index b93ba8b..82d1123 100644 index 3e60f59..c7e4c9a 100644
--- a/MAINTAINERS --- a/MAINTAINERS
+++ b/MAINTAINERS +++ b/MAINTAINERS
@@ -11391,6 +11391,7 @@ S: Maintained @@ -11404,6 +11404,7 @@ S: Maintained
F: include/linux/virtio_vsock.h F: include/linux/virtio_vsock.h
F: include/uapi/linux/virtio_vsock.h F: include/uapi/linux/virtio_vsock.h
F: net/vmw_vsock/virtio_transport_common.c F: net/vmw_vsock/virtio_transport_common.c
@@ -28,10 +29,10 @@ index b93ba8b..82d1123 100644
M: Stephen Chandler Paul <thatslyude@gmail.com> M: Stephen Chandler Paul <thatslyude@gmail.com>
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
new file mode 100644 new file mode 100644
index 0000000..45472e0 index 0000000..699dfab
--- /dev/null --- /dev/null
+++ b/net/vmw_vsock/virtio_transport.c +++ b/net/vmw_vsock/virtio_transport.c
@@ -0,0 +1,584 @@ @@ -0,0 +1,624 @@
+/* +/*
+ * virtio transport for vsock + * virtio transport for vsock
+ * + *
@@ -47,6 +48,7 @@ index 0000000..45472e0
+#include <linux/spinlock.h> +#include <linux/spinlock.h>
+#include <linux/module.h> +#include <linux/module.h>
+#include <linux/list.h> +#include <linux/list.h>
+#include <linux/atomic.h>
+#include <linux/virtio.h> +#include <linux/virtio.h>
+#include <linux/virtio_ids.h> +#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h> +#include <linux/virtio_config.h>
@@ -58,7 +60,6 @@ index 0000000..45472e0
+static struct workqueue_struct *virtio_vsock_workqueue; +static struct workqueue_struct *virtio_vsock_workqueue;
+static struct virtio_vsock *the_virtio_vsock; +static struct virtio_vsock *the_virtio_vsock;
+static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */ +static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
+static void virtio_vsock_rx_fill(struct virtio_vsock *vsock);
+ +
+struct virtio_vsock { +struct virtio_vsock {
+ struct virtio_device *vdev; + struct virtio_device *vdev;
@@ -69,13 +70,16 @@ index 0000000..45472e0
+ struct work_struct rx_work; + struct work_struct rx_work;
+ struct work_struct event_work; + struct work_struct event_work;
+ +
+ wait_queue_head_t tx_wait; /* for waiting for tx resources */
+
+ /* The following fields are protected by tx_lock. vqs[VSOCK_VQ_TX] + /* The following fields are protected by tx_lock. vqs[VSOCK_VQ_TX]
+ * must be accessed with tx_lock held. + * must be accessed with tx_lock held.
+ */ + */
+ struct mutex tx_lock; + struct mutex tx_lock;
+ u32 total_tx_buf; +
+ struct work_struct send_pkt_work;
+ spinlock_t send_pkt_list_lock;
+ struct list_head send_pkt_list;
+
+ atomic_t queued_replies;
+ +
+ /* The following fields are protected by rx_lock. vqs[VSOCK_VQ_RX] + /* The following fields are protected by rx_lock. vqs[VSOCK_VQ_RX]
+ * must be accessed with rx_lock held. + * must be accessed with rx_lock held.
@@ -105,45 +109,88 @@ index 0000000..45472e0
+ return vsock->guest_cid; + return vsock->guest_cid;
+} +}
+ +
+static int +static void
+virtio_transport_send_one_pkt(struct virtio_vsock *vsock, +virtio_transport_send_pkt_work(struct work_struct *work)
+ struct virtio_vsock_pkt *pkt)
+{ +{
+ struct scatterlist hdr, buf, *sgs[2]; + struct virtio_vsock *vsock =
+ int ret, in_sg = 0, out_sg = 0; + container_of(work, struct virtio_vsock, send_pkt_work);
+ struct virtqueue *vq; + struct virtqueue *vq;
+ DEFINE_WAIT(wait); + bool added = false;
+ bool restart_rx = false;
+
+ mutex_lock(&vsock->tx_lock);
+ +
+ vq = vsock->vqs[VSOCK_VQ_TX]; + vq = vsock->vqs[VSOCK_VQ_TX];
+ +
+ /* Put pkt in the virtqueue */ + /* Avoid unnecessary interrupts while we're processing the ring */
+ sg_init_one(&hdr, &pkt->hdr, sizeof(pkt->hdr)); + virtqueue_disable_cb(vq);
+ sgs[out_sg++] = &hdr; +
+ if (pkt->buf) { + for (;;) {
+ sg_init_one(&buf, pkt->buf, pkt->len); + struct virtio_vsock_pkt *pkt;
+ sgs[out_sg++] = &buf; + struct scatterlist hdr, buf, *sgs[2];
+ int ret, in_sg = 0, out_sg = 0;
+ bool reply;
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ if (list_empty(&vsock->send_pkt_list)) {
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+ virtqueue_enable_cb(vq);
+ break;
+ }
+
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ reply = pkt->reply;
+
+ sg_init_one(&hdr, &pkt->hdr, sizeof(pkt->hdr));
+ sgs[out_sg++] = &hdr;
+ if (pkt->buf) {
+ sg_init_one(&buf, pkt->buf, pkt->len);
+ sgs[out_sg++] = &buf;
+ }
+
+ ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, pkt, GFP_KERNEL);
+ if (ret < 0) {
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ if (!virtqueue_enable_cb(vq) && ret == -ENOSPC)
+ continue; /* retry now that we have more space */
+ break;
+ }
+
+ if (reply) {
+ struct virtqueue *rx_vq = vsock->vqs[VSOCK_VQ_RX];
+ int val;
+
+ val = atomic_dec_return(&vsock->queued_replies);
+
+ /* Do we now have resources to resume rx processing? */
+ if (val + 1 == virtqueue_get_vring_size(rx_vq))
+ restart_rx = true;
+ }
+
+ added = true;
+ } + }
+ +
+ mutex_lock(&vsock->tx_lock); + if (added)
+ while ((ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, pkt, + virtqueue_kick(vq);
+ GFP_KERNEL)) < 0) { +
+ prepare_to_wait_exclusive(&vsock->tx_wait, &wait,
+ TASK_UNINTERRUPTIBLE);
+ mutex_unlock(&vsock->tx_lock);
+ schedule();
+ mutex_lock(&vsock->tx_lock);
+ finish_wait(&vsock->tx_wait, &wait);
+ }
+ virtqueue_kick(vq);
+ mutex_unlock(&vsock->tx_lock); + mutex_unlock(&vsock->tx_lock);
+ +
+ return pkt->len; + if (restart_rx)
+ queue_work(virtio_vsock_workqueue, &vsock->rx_work);
+} +}
+ +
+static int +static int
+virtio_transport_send_pkt_no_sock(struct virtio_vsock_pkt *pkt) +virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
+{ +{
+ struct virtio_vsock *vsock; + struct virtio_vsock *vsock;
+ int len = pkt->len;
+ +
+ vsock = virtio_vsock_get(); + vsock = virtio_vsock_get();
+ if (!vsock) { + if (!vsock) {
@@ -151,71 +198,15 @@ index 0000000..45472e0
+ return -ENODEV; + return -ENODEV;
+ } + }
+ +
+ return virtio_transport_send_one_pkt(vsock, pkt); + if (pkt->reply)
+} + atomic_inc(&vsock->queued_replies);
+ +
+static int + spin_lock_bh(&vsock->send_pkt_list_lock);
+virtio_transport_send_pkt(struct vsock_sock *vsk, + list_add_tail(&pkt->list, &vsock->send_pkt_list);
+ struct virtio_vsock_pkt_info *info) + spin_unlock_bh(&vsock->send_pkt_list_lock);
+{
+ u32 src_cid, src_port, dst_cid, dst_port;
+ struct virtio_vsock_sock *vvs;
+ struct virtio_vsock_pkt *pkt;
+ struct virtio_vsock *vsock;
+ u32 pkt_len = info->pkt_len;
+ DEFINE_WAIT(wait);
+ +
+ vsock = virtio_vsock_get(); + queue_work(virtio_vsock_workqueue, &vsock->send_pkt_work);
+ if (!vsock) + return len;
+ return -ENODEV;
+
+ src_cid = virtio_transport_get_local_cid();
+ src_port = vsk->local_addr.svm_port;
+ if (!info->remote_cid) {
+ dst_cid = vsk->remote_addr.svm_cid;
+ dst_port = vsk->remote_addr.svm_port;
+ } else {
+ dst_cid = info->remote_cid;
+ dst_port = info->remote_port;
+ }
+
+ vvs = vsk->trans;
+
+ if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
+ pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+ pkt_len = virtio_transport_get_credit(vvs, pkt_len);
+ /* Do not send zero length OP_RW pkt*/
+ if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
+ return pkt_len;
+
+ /* Respect global tx buf limitation */
+ mutex_lock(&vsock->tx_lock);
+ while (pkt_len + vsock->total_tx_buf > VIRTIO_VSOCK_MAX_TX_BUF_SIZE) {
+ prepare_to_wait_exclusive(&vsock->tx_wait, &wait,
+ TASK_UNINTERRUPTIBLE);
+ mutex_unlock(&vsock->tx_lock);
+ schedule();
+ mutex_lock(&vsock->tx_lock);
+ finish_wait(&vsock->tx_wait, &wait);
+ }
+ vsock->total_tx_buf += pkt_len;
+ mutex_unlock(&vsock->tx_lock);
+
+ pkt = virtio_transport_alloc_pkt(info, pkt_len,
+ src_cid, src_port,
+ dst_cid, dst_port);
+ if (!pkt) {
+ mutex_lock(&vsock->tx_lock);
+ vsock->total_tx_buf -= pkt_len;
+ mutex_unlock(&vsock->tx_lock);
+ virtio_transport_put_credit(vvs, pkt_len);
+ wake_up(&vsock->tx_wait);
+ return -ENOMEM;
+ }
+
+ virtio_transport_inc_tx_pkt(vvs, pkt);
+
+ return virtio_transport_send_one_pkt(vsock, pkt);
+} +}
+ +
+static void virtio_vsock_rx_fill(struct virtio_vsock *vsock) +static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
@@ -258,7 +249,7 @@ index 0000000..45472e0
+ virtqueue_kick(vq); + virtqueue_kick(vq);
+} +}
+ +
+static void virtio_transport_send_pkt_work(struct work_struct *work) +static void virtio_transport_tx_work(struct work_struct *work)
+{ +{
+ struct virtio_vsock *vsock = + struct virtio_vsock *vsock =
+ container_of(work, struct virtio_vsock, tx_work); + container_of(work, struct virtio_vsock, tx_work);
@@ -273,7 +264,6 @@ index 0000000..45472e0
+ +
+ virtqueue_disable_cb(vq); + virtqueue_disable_cb(vq);
+ while ((pkt = virtqueue_get_buf(vq, &len)) != NULL) { + while ((pkt = virtqueue_get_buf(vq, &len)) != NULL) {
+ vsock->total_tx_buf -= pkt->len;
+ virtio_transport_free_pkt(pkt); + virtio_transport_free_pkt(pkt);
+ added = true; + added = true;
+ } + }
@@ -281,23 +271,50 @@ index 0000000..45472e0
+ mutex_unlock(&vsock->tx_lock); + mutex_unlock(&vsock->tx_lock);
+ +
+ if (added) + if (added)
+ wake_up(&vsock->tx_wait); + queue_work(virtio_vsock_workqueue, &vsock->send_pkt_work);
+} +}
+ +
+static void virtio_transport_recv_pkt_work(struct work_struct *work) +/* Is there space left for replies to rx packets? */
+static bool virtio_transport_more_replies(struct virtio_vsock *vsock)
+{
+ struct virtqueue *vq = vsock->vqs[VSOCK_VQ_RX];
+ int val;
+
+ smp_rmb(); /* paired with atomic_inc() and atomic_dec_return() */
+ val = atomic_read(&vsock->queued_replies);
+
+ return val < virtqueue_get_vring_size(vq);
+}
+
+static void virtio_transport_rx_work(struct work_struct *work)
+{ +{
+ struct virtio_vsock *vsock = + struct virtio_vsock *vsock =
+ container_of(work, struct virtio_vsock, rx_work); + container_of(work, struct virtio_vsock, rx_work);
+ struct virtqueue *vq; + struct virtqueue *vq;
+ +
+ vq = vsock->vqs[VSOCK_VQ_RX]; + vq = vsock->vqs[VSOCK_VQ_RX];
+ mutex_lock(&vsock->rx_lock);
+ do {
+ struct virtio_vsock_pkt *pkt;
+ unsigned int len;
+ +
+ mutex_lock(&vsock->rx_lock);
+
+ do {
+ virtqueue_disable_cb(vq); + virtqueue_disable_cb(vq);
+ while ((pkt = virtqueue_get_buf(vq, &len)) != NULL) { + for (;;) {
+ struct virtio_vsock_pkt *pkt;
+ unsigned int len;
+
+ if (!virtio_transport_more_replies(vsock)) {
+ /* Stop rx until the device processes already
+ * pending replies. Leave rx virtqueue
+ * callbacks disabled.
+ */
+ goto out;
+ }
+
+ pkt = virtqueue_get_buf(vq, &len);
+ if (!pkt) {
+ break;
+ }
+
+ vsock->rx_buf_nr--; + vsock->rx_buf_nr--;
+ +
+ /* Drop short/long packets */ + /* Drop short/long packets */
@@ -312,6 +329,7 @@ index 0000000..45472e0
+ } + }
+ } while (!virtqueue_enable_cb(vq)); + } while (!virtqueue_enable_cb(vq));
+ +
+out:
+ if (vsock->rx_buf_nr < vsock->rx_buf_max_nr / 2) + if (vsock->rx_buf_nr < vsock->rx_buf_max_nr / 2)
+ virtio_vsock_rx_fill(vsock); + virtio_vsock_rx_fill(vsock);
+ mutex_unlock(&vsock->rx_lock); + mutex_unlock(&vsock->rx_lock);
@@ -357,11 +375,11 @@ index 0000000..45472e0
+static void virtio_vsock_update_guest_cid(struct virtio_vsock *vsock) +static void virtio_vsock_update_guest_cid(struct virtio_vsock *vsock)
+{ +{
+ struct virtio_device *vdev = vsock->vdev; + struct virtio_device *vdev = vsock->vdev;
+ u32 guest_cid; + u64 guest_cid;
+ +
+ vdev->config->get(vdev, offsetof(struct virtio_vsock_config, guest_cid), + vdev->config->get(vdev, offsetof(struct virtio_vsock_config, guest_cid),
+ &guest_cid, sizeof(guest_cid)); + &guest_cid, sizeof(guest_cid));
+ vsock->guest_cid = le32_to_cpu(guest_cid); + vsock->guest_cid = le64_to_cpu(guest_cid);
+} +}
+ +
+/* event_lock must be held */ +/* event_lock must be held */
@@ -473,8 +491,7 @@ index 0000000..45472e0
+ .get_max_buffer_size = virtio_transport_get_max_buffer_size, + .get_max_buffer_size = virtio_transport_get_max_buffer_size,
+ }, + },
+ +
+ .send_pkt = virtio_transport_send_pkt, + .send_pkt = virtio_transport_send_pkt,
+ .send_pkt_no_sock = virtio_transport_send_pkt_no_sock,
+}; +};
+ +
+static int virtio_vsock_probe(struct virtio_device *vdev) +static int virtio_vsock_probe(struct virtio_device *vdev)
@@ -523,16 +540,19 @@ index 0000000..45472e0
+ +
+ vsock->rx_buf_nr = 0; + vsock->rx_buf_nr = 0;
+ vsock->rx_buf_max_nr = 0; + vsock->rx_buf_max_nr = 0;
+ atomic_set(&vsock->queued_replies, 0);
+ +
+ vdev->priv = vsock; + vdev->priv = vsock;
+ the_virtio_vsock = vsock; + the_virtio_vsock = vsock;
+ init_waitqueue_head(&vsock->tx_wait);
+ mutex_init(&vsock->tx_lock); + mutex_init(&vsock->tx_lock);
+ mutex_init(&vsock->rx_lock); + mutex_init(&vsock->rx_lock);
+ mutex_init(&vsock->event_lock); + mutex_init(&vsock->event_lock);
+ INIT_WORK(&vsock->rx_work, virtio_transport_recv_pkt_work); + spin_lock_init(&vsock->send_pkt_list_lock);
+ INIT_WORK(&vsock->tx_work, virtio_transport_send_pkt_work); + INIT_LIST_HEAD(&vsock->send_pkt_list);
+ INIT_WORK(&vsock->rx_work, virtio_transport_rx_work);
+ INIT_WORK(&vsock->tx_work, virtio_transport_tx_work);
+ INIT_WORK(&vsock->event_work, virtio_transport_event_work); + INIT_WORK(&vsock->event_work, virtio_transport_event_work);
+ INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
+ +
+ mutex_lock(&vsock->rx_lock); + mutex_lock(&vsock->rx_lock);
+ virtio_vsock_rx_fill(vsock); + virtio_vsock_rx_fill(vsock);
@@ -556,13 +576,34 @@ index 0000000..45472e0
+static void virtio_vsock_remove(struct virtio_device *vdev) +static void virtio_vsock_remove(struct virtio_device *vdev)
+{ +{
+ struct virtio_vsock *vsock = vdev->priv; + struct virtio_vsock *vsock = vdev->priv;
+ struct virtio_vsock_pkt *pkt;
+ +
+ flush_work(&vsock->rx_work); + flush_work(&vsock->rx_work);
+ flush_work(&vsock->tx_work); + flush_work(&vsock->tx_work);
+ flush_work(&vsock->event_work); + flush_work(&vsock->event_work);
+ flush_work(&vsock->send_pkt_work);
+ +
+ vdev->config->reset(vdev); + vdev->config->reset(vdev);
+ +
+ mutex_lock(&vsock->rx_lock);
+ while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
+ virtio_transport_free_pkt(pkt);
+ mutex_unlock(&vsock->rx_lock);
+
+ mutex_lock(&vsock->tx_lock);
+ while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_TX])))
+ virtio_transport_free_pkt(pkt);
+ mutex_unlock(&vsock->tx_lock);
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ while (!list_empty(&vsock->send_pkt_list)) {
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del(&pkt->list);
+ virtio_transport_free_pkt(pkt);
+ }
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ mutex_lock(&the_virtio_vsock_mutex); + mutex_lock(&the_virtio_vsock_mutex);
+ the_virtio_vsock = NULL; + the_virtio_vsock = NULL;
+ vsock_core_exit(); + vsock_core_exit();
@@ -617,5 +658,5 @@ index 0000000..45472e0
+MODULE_DESCRIPTION("virtio transport for vsock"); +MODULE_DESCRIPTION("virtio transport for vsock");
+MODULE_DEVICE_TABLE(virtio, id_table); +MODULE_DEVICE_TABLE(virtio, id_table);
-- --
2.9.0 2.9.3
View File
@@ -1,26 +1,26 @@
From b83fd98b14a95b6ae4b5eee5c565e645aee41442 Mon Sep 17 00:00:00 2001 From 007cc7ab2d6af1c213b5d5e8fc0cd651bad8f486 Mon Sep 17 00:00:00 2001
From: Asias He <asias@redhat.com> From: Asias He <asias@redhat.com>
Date: Thu, 13 Jun 2013 18:29:21 +0800 Date: Thu, 28 Jul 2016 15:36:34 +0100
Subject: [PATCH 07/40] VSOCK: Introduce vhost_vsock.ko Subject: [PATCH 09/45] VSOCK: Introduce vhost_vsock.ko
VM sockets vhost transport implementation. This driver runs on the VM sockets vhost transport implementation. This driver runs on the
host. host.
Signed-off-by: Asias He <asias@redhat.com> Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(from RFC v6 <1469716595-13591-6-git-send-email-stefanha@redhat.com>)
--- ---
MAINTAINERS | 2 + MAINTAINERS | 2 +
drivers/vhost/vsock.c | 694 ++++++++++++++++++++++++++++++++++++++++++++++++++ drivers/vhost/vsock.c | 722 +++++++++++++++++++++++++++++++++++++++++++++
drivers/vhost/vsock.h | 5 + include/uapi/linux/vhost.h | 5 +
3 files changed, 701 insertions(+) 3 files changed, 729 insertions(+)
create mode 100644 drivers/vhost/vsock.c create mode 100644 drivers/vhost/vsock.c
create mode 100644 drivers/vhost/vsock.h
diff --git a/MAINTAINERS b/MAINTAINERS diff --git a/MAINTAINERS b/MAINTAINERS
index 82d1123..12d49f5 100644 index c7e4c9a..fa94182 100644
--- a/MAINTAINERS --- a/MAINTAINERS
+++ b/MAINTAINERS +++ b/MAINTAINERS
@@ -11392,6 +11392,8 @@ F: include/linux/virtio_vsock.h @@ -11405,6 +11405,8 @@ F: include/linux/virtio_vsock.h
F: include/uapi/linux/virtio_vsock.h F: include/uapi/linux/virtio_vsock.h
F: net/vmw_vsock/virtio_transport_common.c F: net/vmw_vsock/virtio_transport_common.c
F: net/vmw_vsock/virtio_transport.c F: net/vmw_vsock/virtio_transport.c
@@ -31,10 +31,10 @@ index 82d1123..12d49f5 100644
M: Stephen Chandler Paul <thatslyude@gmail.com> M: Stephen Chandler Paul <thatslyude@gmail.com>
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
new file mode 100644 new file mode 100644
index 0000000..8488d01 index 0000000..028ca16
--- /dev/null --- /dev/null
+++ b/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c
@@ -0,0 +1,694 @@ @@ -0,0 +1,722 @@
+/* +/*
+ * vhost transport for vsock + * vhost transport for vsock
+ * + *
@@ -45,15 +45,16 @@ index 0000000..8488d01
+ * This work is licensed under the terms of the GNU GPL, version 2. + * This work is licensed under the terms of the GNU GPL, version 2.
+ */ + */
+#include <linux/miscdevice.h> +#include <linux/miscdevice.h>
+#include <linux/atomic.h>
+#include <linux/module.h> +#include <linux/module.h>
+#include <linux/mutex.h> +#include <linux/mutex.h>
+#include <linux/vmalloc.h>
+#include <net/sock.h> +#include <net/sock.h>
+#include <linux/virtio_vsock.h> +#include <linux/virtio_vsock.h>
+#include <linux/vhost.h> +#include <linux/vhost.h>
+ +
+#include <net/af_vsock.h> +#include <net/af_vsock.h>
+#include "vhost.h" +#include "vhost.h"
+#include "vsock.h"
+ +
+#define VHOST_VSOCK_DEFAULT_HOST_CID 2 +#define VHOST_VSOCK_DEFAULT_HOST_CID 2
+ +
@@ -62,22 +63,21 @@ index 0000000..8488d01
+}; +};
+ +
+/* Used to track all the vhost_vsock instances on the system. */ +/* Used to track all the vhost_vsock instances on the system. */
+static DEFINE_SPINLOCK(vhost_vsock_lock);
+static LIST_HEAD(vhost_vsock_list); +static LIST_HEAD(vhost_vsock_list);
+static DEFINE_MUTEX(vhost_vsock_mutex);
+ +
+struct vhost_vsock { +struct vhost_vsock {
+ struct vhost_dev dev; + struct vhost_dev dev;
+ struct vhost_virtqueue vqs[2]; + struct vhost_virtqueue vqs[2];
+ +
+ /* Link to global vhost_vsock_list, protected by vhost_vsock_mutex */ + /* Link to global vhost_vsock_list, protected by vhost_vsock_lock */
+ struct list_head list; + struct list_head list;
+ +
+ struct vhost_work send_pkt_work; + struct vhost_work send_pkt_work;
+ wait_queue_head_t send_wait; + spinlock_t send_pkt_list_lock;
+
+ /* Fields protected by vqs[VSOCK_VQ_RX].mutex */
+ struct list_head send_pkt_list; /* host->guest pending packets */ + struct list_head send_pkt_list; /* host->guest pending packets */
+ u32 total_tx_buf; +
+ atomic_t queued_replies;
+ +
+ u32 guest_cid; + u32 guest_cid;
+}; +};
@@ -91,7 +91,7 @@ index 0000000..8488d01
+{ +{
+ struct vhost_vsock *vsock; + struct vhost_vsock *vsock;
+ +
+ mutex_lock(&vhost_vsock_mutex); + spin_lock_bh(&vhost_vsock_lock);
+ list_for_each_entry(vsock, &vhost_vsock_list, list) { + list_for_each_entry(vsock, &vhost_vsock_list, list) {
+ u32 other_cid = vsock->guest_cid; + u32 other_cid = vsock->guest_cid;
+ +
@@ -100,11 +100,11 @@ index 0000000..8488d01
+ continue; + continue;
+ +
+ if (other_cid == guest_cid) { + if (other_cid == guest_cid) {
+ mutex_unlock(&vhost_vsock_mutex); + spin_unlock_bh(&vhost_vsock_lock);
+ return vsock; + return vsock;
+ } + }
+ } + }
+ mutex_unlock(&vhost_vsock_mutex); + spin_unlock_bh(&vhost_vsock_lock);
+ +
+ return NULL; + return NULL;
+} +}
@@ -113,10 +113,15 @@ index 0000000..8488d01
+vhost_transport_do_send_pkt(struct vhost_vsock *vsock, +vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
+ struct vhost_virtqueue *vq) + struct vhost_virtqueue *vq)
+{ +{
+ struct vhost_virtqueue *tx_vq = &vsock->vqs[VSOCK_VQ_TX];
+ bool added = false; + bool added = false;
+ bool restart_tx = false;
+ +
+ mutex_lock(&vq->mutex); + mutex_lock(&vq->mutex);
+ +
+ if (!vq->private_data)
+ goto out;
+
+ /* Avoid further vmexits, we're already processing the virtqueue */ + /* Avoid further vmexits, we're already processing the virtqueue */
+ vhost_disable_notify(&vsock->dev, vq); + vhost_disable_notify(&vsock->dev, vq);
+ +
@@ -128,17 +133,32 @@ index 0000000..8488d01
+ size_t len; + size_t len;
+ int head; + int head;
+ +
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ if (list_empty(&vsock->send_pkt_list)) { + if (list_empty(&vsock->send_pkt_list)) {
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+ vhost_enable_notify(&vsock->dev, vq); + vhost_enable_notify(&vsock->dev, vq);
+ break; + break;
+ } + }
+ +
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), + head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ &out, &in, NULL, NULL); + &out, &in, NULL, NULL);
+ if (head < 0) + if (head < 0) {
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+ break; + break;
+ }
+ +
+ if (head == vq->num) { + if (head == vq->num) {
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ list_add(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ /* We cannot finish yet if more buffers snuck in while + /* We cannot finish yet if more buffers snuck in while
+ * re-enabling notify. + * re-enabling notify.
+ */ + */
@@ -149,10 +169,6 @@ index 0000000..8488d01
+ break; + break;
+ } + }
+ +
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+
+ if (out) { + if (out) {
+ virtio_transport_free_pkt(pkt); + virtio_transport_free_pkt(pkt);
+ vq_err(vq, "Expected 0 output buffers, got %u\n", out); + vq_err(vq, "Expected 0 output buffers, got %u\n", out);
@@ -179,16 +195,26 @@ index 0000000..8488d01
+ vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len); + vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+ added = true; + added = true;
+ +
+ vsock->total_tx_buf -= pkt->len; + if (pkt->reply) {
+ int val;
+
+ val = atomic_dec_return(&vsock->queued_replies);
+
+ /* Do we have resources to resume tx processing? */
+ if (val + 1 == tx_vq->num)
+ restart_tx = true;
+ }
+ +
+ virtio_transport_free_pkt(pkt); + virtio_transport_free_pkt(pkt);
+ } + }
+ if (added) + if (added)
+ vhost_signal(&vsock->dev, vq); + vhost_signal(&vsock->dev, vq);
+
+out:
+ mutex_unlock(&vq->mutex); + mutex_unlock(&vq->mutex);
+ +
+ if (added) + if (restart_tx)
+ wake_up(&vsock->send_wait); + vhost_poll_queue(&tx_vq->poll);
+} +}
+ +
+static void vhost_transport_send_pkt_work(struct vhost_work *work) +static void vhost_transport_send_pkt_work(struct vhost_work *work)
@@ -203,104 +229,30 @@ index 0000000..8488d01
+} +}
+ +
+static int +static int
+vhost_transport_send_one_pkt(struct vhost_vsock *vsock, +vhost_transport_send_pkt(struct virtio_vsock_pkt *pkt)
+ struct virtio_vsock_pkt *pkt)
+{
+ struct vhost_virtqueue *vq = &vsock->vqs[VSOCK_VQ_RX];
+
+ /* Queue it up in vhost work */
+ mutex_lock(&vq->mutex);
+ list_add_tail(&pkt->list, &vsock->send_pkt_list);
+ vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
+ mutex_unlock(&vq->mutex);
+
+ return pkt->len;
+}
+
+static int
+vhost_transport_send_pkt_no_sock(struct virtio_vsock_pkt *pkt)
+{ +{
+ struct vhost_vsock *vsock; + struct vhost_vsock *vsock;
+ struct vhost_virtqueue *vq;
+ int len = pkt->len;
+ +
+ /* Find the vhost_vsock according to guest context id */ + /* Find the vhost_vsock according to guest context id */
+ vsock = vhost_vsock_get(le32_to_cpu(pkt->hdr.dst_cid)); + vsock = vhost_vsock_get(le64_to_cpu(pkt->hdr.dst_cid));
+ if (!vsock) { + if (!vsock) {
+ virtio_transport_free_pkt(pkt); + virtio_transport_free_pkt(pkt);
+ return -ENODEV; + return -ENODEV;
+ } + }
+ +
+ return vhost_transport_send_one_pkt(vsock, pkt);
+}
+
+static int
+vhost_transport_send_pkt(struct vsock_sock *vsk,
+ struct virtio_vsock_pkt_info *info)
+{
+ u32 src_cid, src_port, dst_cid, dst_port;
+ struct virtio_vsock_sock *vvs;
+ struct virtio_vsock_pkt *pkt;
+ struct vhost_virtqueue *vq;
+ struct vhost_vsock *vsock;
+ u32 pkt_len = info->pkt_len;
+ DEFINE_WAIT(wait);
+
+ src_cid = vhost_transport_get_local_cid();
+ src_port = vsk->local_addr.svm_port;
+ if (!info->remote_cid) {
+ dst_cid = vsk->remote_addr.svm_cid;
+ dst_port = vsk->remote_addr.svm_port;
+ } else {
+ dst_cid = info->remote_cid;
+ dst_port = info->remote_port;
+ }
+
+ /* Find the vhost_vsock according to guest context id */
+ vsock = vhost_vsock_get(dst_cid);
+ if (!vsock)
+ return -ENODEV;
+
+ vvs = vsk->trans;
+ vq = &vsock->vqs[VSOCK_VQ_RX]; + vq = &vsock->vqs[VSOCK_VQ_RX];
+ +
+ /* we can send less than pkt_len bytes */ + if (pkt->reply)
+ if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE) + atomic_inc(&vsock->queued_replies);
+ pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+ +
+ /* virtio_transport_get_credit might return less than pkt_len credit */ + spin_lock_bh(&vsock->send_pkt_list_lock);
+ pkt_len = virtio_transport_get_credit(vvs, pkt_len); + list_add_tail(&pkt->list, &vsock->send_pkt_list);
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+ +
+ /* Do not send zero length OP_RW pkt*/ + vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
+ if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW) + return len;
+ return pkt_len;
+
+ /* Respect global tx buf limitation */
+ mutex_lock(&vq->mutex);
+ while (pkt_len + vsock->total_tx_buf > VIRTIO_VSOCK_MAX_TX_BUF_SIZE) {
+ prepare_to_wait_exclusive(&vsock->send_wait, &wait,
+ TASK_UNINTERRUPTIBLE);
+ mutex_unlock(&vq->mutex);
+ schedule();
+ mutex_lock(&vq->mutex);
+ finish_wait(&vsock->send_wait, &wait);
+ }
+ vsock->total_tx_buf += pkt_len;
+ mutex_unlock(&vq->mutex);
+
+ pkt = virtio_transport_alloc_pkt(info, pkt_len,
+ src_cid, src_port,
+ dst_cid, dst_port);
+ if (!pkt) {
+ mutex_lock(&vq->mutex);
+ vsock->total_tx_buf -= pkt_len;
+ mutex_unlock(&vq->mutex);
+ virtio_transport_put_credit(vvs, pkt_len);
+ wake_up(&vsock->send_wait);
+ return -ENOMEM;
+ }
+
+ virtio_transport_inc_tx_pkt(vvs, pkt);
+
+ return vhost_transport_send_one_pkt(vsock, pkt);
+} +}
+ +
+static struct virtio_vsock_pkt * +static struct virtio_vsock_pkt *
@@ -362,6 +314,18 @@ index 0000000..8488d01
+ return pkt; + return pkt;
+} +}
+ +
+/* Is there space left for replies to rx packets? */
+static bool vhost_vsock_more_replies(struct vhost_vsock *vsock)
+{
+ struct vhost_virtqueue *vq = &vsock->vqs[VSOCK_VQ_TX];
+ int val;
+
+ smp_rmb(); /* paired with atomic_inc() and atomic_dec_return() */
+ val = atomic_read(&vsock->queued_replies);
+
+ return val < vq->num;
+}
+
+static void vhost_vsock_handle_tx_kick(struct vhost_work *work) +static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
+{ +{
+ struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue, + struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
@@ -374,8 +338,20 @@ index 0000000..8488d01
+ bool added = false; + bool added = false;
+ +
+ mutex_lock(&vq->mutex); + mutex_lock(&vq->mutex);
+
+ if (!vq->private_data)
+ goto out;
+
+ vhost_disable_notify(&vsock->dev, vq); + vhost_disable_notify(&vsock->dev, vq);
+ for (;;) { + for (;;) {
+ if (!vhost_vsock_more_replies(vsock)) {
+ /* Stop tx until the device processes already
+ * pending replies. Leave tx virtqueue
+ * callbacks disabled.
+ */
+ goto no_more_replies;
+ }
+
+ head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), + head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ &out, &in, NULL, NULL); + &out, &in, NULL, NULL);
+ if (head < 0) + if (head < 0)
@@ -396,7 +372,7 @@ index 0000000..8488d01
+ } + }
+ +
+ /* Only accept correctly addressed packets */ + /* Only accept correctly addressed packets */
+ if (le32_to_cpu(pkt->hdr.src_cid) == vsock->guest_cid) + if (le64_to_cpu(pkt->hdr.src_cid) == vsock->guest_cid)
+ virtio_transport_recv_pkt(pkt); + virtio_transport_recv_pkt(pkt);
+ else + else
+ virtio_transport_free_pkt(pkt); + virtio_transport_free_pkt(pkt);
@@ -404,8 +380,12 @@ index 0000000..8488d01
+ vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len); + vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+ added = true; + added = true;
+ } + }
+
+no_more_replies:
+ if (added) + if (added)
+ vhost_signal(&vsock->dev, vq); + vhost_signal(&vsock->dev, vq);
+
+out:
+ mutex_unlock(&vq->mutex); + mutex_unlock(&vq->mutex);
+} +}
+ +
@@ -465,21 +445,36 @@ index 0000000..8488d01
+ return ret; + return ret;
+} +}
+ +
+static void vhost_vsock_stop(struct vhost_vsock *vsock) +static int vhost_vsock_stop(struct vhost_vsock *vsock)
+{ +{
+ size_t i; + size_t i;
+ int ret;
+ +
+ mutex_lock(&vsock->dev.mutex); + mutex_lock(&vsock->dev.mutex);
+ +
+ ret = vhost_dev_check_owner(&vsock->dev);
+ if (ret)
+ goto err;
+
+ for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { + for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
+ struct vhost_virtqueue *vq = &vsock->vqs[i]; + struct vhost_virtqueue *vq = &vsock->vqs[i];
+ +
+ mutex_lock(&vq->mutex); + mutex_lock(&vq->mutex);
+ vq->private_data = vsock; + vq->private_data = NULL;
+ mutex_unlock(&vq->mutex); + mutex_unlock(&vq->mutex);
+ } + }
+ +
+err:
+ mutex_unlock(&vsock->dev.mutex); + mutex_unlock(&vsock->dev.mutex);
+ return ret;
+}
+
+static void vhost_vsock_free(struct vhost_vsock *vsock)
+{
+ if (is_vmalloc_addr(vsock))
+ vfree(vsock);
+ else
+ kfree(vsock);
+} +}
+ +
+static int vhost_vsock_dev_open(struct inode *inode, struct file *file) +static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
@@ -488,9 +483,15 @@ index 0000000..8488d01
+ struct vhost_vsock *vsock; + struct vhost_vsock *vsock;
+ int ret; + int ret;
+ +
+ vsock = kzalloc(sizeof(*vsock), GFP_KERNEL); + /* This struct is large and allocation could fail, fall back to vmalloc
+ if (!vsock) + * if there is no other way.
+ return -ENOMEM; + */
+ vsock = kzalloc(sizeof(*vsock), GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+ if (!vsock) {
+ vsock = vmalloc(sizeof(*vsock));
+ if (!vsock)
+ return -ENOMEM;
+ }
+ +
+ vqs = kmalloc_array(ARRAY_SIZE(vsock->vqs), sizeof(*vqs), GFP_KERNEL); + vqs = kmalloc_array(ARRAY_SIZE(vsock->vqs), sizeof(*vqs), GFP_KERNEL);
+ if (!vqs) { + if (!vqs) {
@@ -498,6 +499,8 @@ index 0000000..8488d01
+ goto out; + goto out;
+ } + }
+ +
+ atomic_set(&vsock->queued_replies, 0);
+
+ vqs[VSOCK_VQ_TX] = &vsock->vqs[VSOCK_VQ_TX]; + vqs[VSOCK_VQ_TX] = &vsock->vqs[VSOCK_VQ_TX];
+ vqs[VSOCK_VQ_RX] = &vsock->vqs[VSOCK_VQ_RX]; + vqs[VSOCK_VQ_RX] = &vsock->vqs[VSOCK_VQ_RX];
+ vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick; + vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick;
@@ -506,17 +509,17 @@ index 0000000..8488d01
+ vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs)); + vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs));
+ +
+ file->private_data = vsock; + file->private_data = vsock;
+ init_waitqueue_head(&vsock->send_wait); + spin_lock_init(&vsock->send_pkt_list_lock);
+ INIT_LIST_HEAD(&vsock->send_pkt_list); + INIT_LIST_HEAD(&vsock->send_pkt_list);
+ vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work); + vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work);
+ +
+ mutex_lock(&vhost_vsock_mutex); + spin_lock_bh(&vhost_vsock_lock);
+ list_add_tail(&vsock->list, &vhost_vsock_list); + list_add_tail(&vsock->list, &vhost_vsock_list);
+ mutex_unlock(&vhost_vsock_mutex); + spin_unlock_bh(&vhost_vsock_lock);
+ return 0; + return 0;
+ +
+out: +out:
+ kfree(vsock); + vhost_vsock_free(vsock);
+ return ret; + return ret;
+} +}
+ +
@@ -534,22 +537,27 @@ index 0000000..8488d01
+{ +{
+ struct vsock_sock *vsk = vsock_sk(sk); + struct vsock_sock *vsk = vsock_sk(sk);
+ +
+ lock_sock(sk); + /* vmci_transport.c doesn't take sk_lock here either. At least we're
+ * under vsock_table_lock so the sock cannot disappear while we're
+ * executing.
+ */
+
+ if (!vhost_vsock_get(vsk->local_addr.svm_cid)) { + if (!vhost_vsock_get(vsk->local_addr.svm_cid)) {
+ sock_set_flag(sk, SOCK_DONE);
+ vsk->peer_shutdown = SHUTDOWN_MASK;
+ sk->sk_state = SS_UNCONNECTED; + sk->sk_state = SS_UNCONNECTED;
+ sk->sk_err = ECONNRESET; + sk->sk_err = ECONNRESET;
+ sk->sk_error_report(sk); + sk->sk_error_report(sk);
+ } + }
+ release_sock(sk);
+} +}
+ +
+static int vhost_vsock_dev_release(struct inode *inode, struct file *file) +static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
+{ +{
+ struct vhost_vsock *vsock = file->private_data; + struct vhost_vsock *vsock = file->private_data;
+ +
+ mutex_lock(&vhost_vsock_mutex); + spin_lock_bh(&vhost_vsock_lock);
+ list_del(&vsock->list); + list_del(&vsock->list);
+ mutex_unlock(&vhost_vsock_mutex); + spin_unlock_bh(&vhost_vsock_lock);
+ +
+ /* Iterating over all connections for all CIDs to find orphans is + /* Iterating over all connections for all CIDs to find orphans is
+ * inefficient. Room for improvement here. */ + * inefficient. Room for improvement here. */
@@ -558,18 +566,35 @@ index 0000000..8488d01
+ vhost_vsock_stop(vsock); + vhost_vsock_stop(vsock);
+ vhost_vsock_flush(vsock); + vhost_vsock_flush(vsock);
+ vhost_dev_stop(&vsock->dev); + vhost_dev_stop(&vsock->dev);
+
+ spin_lock_bh(&vsock->send_pkt_list_lock);
+ while (!list_empty(&vsock->send_pkt_list)) {
+ struct virtio_vsock_pkt *pkt;
+
+ pkt = list_first_entry(&vsock->send_pkt_list,
+ struct virtio_vsock_pkt, list);
+ list_del_init(&pkt->list);
+ virtio_transport_free_pkt(pkt);
+ }
+ spin_unlock_bh(&vsock->send_pkt_list_lock);
+
+ vhost_dev_cleanup(&vsock->dev, false); + vhost_dev_cleanup(&vsock->dev, false);
+ kfree(vsock->dev.vqs); + kfree(vsock->dev.vqs);
+ kfree(vsock); + vhost_vsock_free(vsock);
+ return 0; + return 0;
+} +}
+ +
+static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u32 guest_cid) +static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u64 guest_cid)
+{ +{
+ struct vhost_vsock *other; + struct vhost_vsock *other;
+ +
+ /* Refuse reserved CIDs */ + /* Refuse reserved CIDs */
+ if (guest_cid <= VMADDR_CID_HOST) + if (guest_cid <= VMADDR_CID_HOST ||
+ guest_cid == U32_MAX)
+ return -EINVAL;
+
+ /* 64-bit CIDs are not yet supported */
+ if (guest_cid > U32_MAX)
+ return -EINVAL; + return -EINVAL;
+ +
+ /* Refuse if CID is already in use */ + /* Refuse if CID is already in use */
@@ -577,9 +602,9 @@ index 0000000..8488d01
+ if (other && other != vsock) + if (other && other != vsock)
+ return -EADDRINUSE; + return -EADDRINUSE;
+ +
+ mutex_lock(&vhost_vsock_mutex); + spin_lock_bh(&vhost_vsock_lock);
+ vsock->guest_cid = guest_cid; + vsock->guest_cid = guest_cid;
+ mutex_unlock(&vhost_vsock_mutex); + spin_unlock_bh(&vhost_vsock_lock);
+ +
+ return 0; + return 0;
+} +}
@@ -614,26 +639,30 @@ index 0000000..8488d01
+{ +{
+ struct vhost_vsock *vsock = f->private_data; + struct vhost_vsock *vsock = f->private_data;
+ void __user *argp = (void __user *)arg; + void __user *argp = (void __user *)arg;
+ u64 __user *featurep = argp; + u64 guest_cid;
+ u32 __user *cidp = argp;
+ u32 guest_cid;
+ u64 features; + u64 features;
+ int start;
+ int r; + int r;
+ +
+ switch (ioctl) { + switch (ioctl) {
+ case VHOST_VSOCK_SET_GUEST_CID: + case VHOST_VSOCK_SET_GUEST_CID:
+ if (get_user(guest_cid, cidp)) + if (copy_from_user(&guest_cid, argp, sizeof(guest_cid)))
+ return -EFAULT; + return -EFAULT;
+ return vhost_vsock_set_cid(vsock, guest_cid); + return vhost_vsock_set_cid(vsock, guest_cid);
+ case VHOST_VSOCK_START: + case VHOST_VSOCK_SET_RUNNING:
+ return vhost_vsock_start(vsock); + if (copy_from_user(&start, argp, sizeof(start)))
+ return -EFAULT;
+ if (start)
+ return vhost_vsock_start(vsock);
+ else
+ return vhost_vsock_stop(vsock);
+ case VHOST_GET_FEATURES: + case VHOST_GET_FEATURES:
+ features = VHOST_VSOCK_FEATURES; + features = VHOST_VSOCK_FEATURES;
+ if (copy_to_user(featurep, &features, sizeof(features))) + if (copy_to_user(argp, &features, sizeof(features)))
+ return -EFAULT; + return -EFAULT;
+ return 0; + return 0;
+ case VHOST_SET_FEATURES: + case VHOST_SET_FEATURES:
+ if (copy_from_user(&features, featurep, sizeof(features))) + if (copy_from_user(&features, argp, sizeof(features)))
+ return -EFAULT; + return -EFAULT;
+ return vhost_vsock_set_features(vsock, features); + return vhost_vsock_set_features(vsock, features);
+ default: + default:
@@ -705,7 +734,6 @@ index 0000000..8488d01
+ }, + },
+ +
+ .send_pkt = vhost_transport_send_pkt, + .send_pkt = vhost_transport_send_pkt,
+ .send_pkt_no_sock = vhost_transport_send_pkt_no_sock,
+}; +};
+ +
+static int __init vhost_vsock_init(void) +static int __init vhost_vsock_init(void)
@@ -729,17 +757,20 @@ index 0000000..8488d01
+MODULE_LICENSE("GPL v2"); +MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Asias He"); +MODULE_AUTHOR("Asias He");
+MODULE_DESCRIPTION("vhost transport for vsock "); +MODULE_DESCRIPTION("vhost transport for vsock ");
diff --git a/drivers/vhost/vsock.h b/drivers/vhost/vsock.h diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
new file mode 100644 index ab373191..b306476 100644
index 0000000..173f9fc --- a/include/uapi/linux/vhost.h
--- /dev/null +++ b/include/uapi/linux/vhost.h
+++ b/drivers/vhost/vsock.h @@ -169,4 +169,9 @@ struct vhost_scsi_target {
@@ -0,0 +1,5 @@ #define VHOST_SCSI_SET_EVENTS_MISSED _IOW(VHOST_VIRTIO, 0x43, __u32)
+#ifndef VHOST_VSOCK_H #define VHOST_SCSI_GET_EVENTS_MISSED _IOW(VHOST_VIRTIO, 0x44, __u32)
+#define VHOST_VSOCK_H
+#define VHOST_VSOCK_SET_GUEST_CID _IOW(VHOST_VIRTIO, 0x60, __u32) +/* VHOST_VSOCK specific defines */
+#define VHOST_VSOCK_START _IO(VHOST_VIRTIO, 0x61) +
+#endif +#define VHOST_VSOCK_SET_GUEST_CID _IOW(VHOST_VIRTIO, 0x60, __u64)
+#define VHOST_VSOCK_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
+
#endif
-- --
2.9.0 2.9.3
@@ -1,18 +1,19 @@
From 17871c8224feaa5cf4944f3f09800968a8f19589 Mon Sep 17 00:00:00 2001 From d8b94a29a8e7fee77250aaa1ab3b1f80ad6c8882 Mon Sep 17 00:00:00 2001
From: Asias He <asias@redhat.com> From: Asias He <asias@redhat.com>
Date: Thu, 13 Jun 2013 18:30:19 +0800 Date: Thu, 28 Jul 2016 15:36:35 +0100
Subject: [PATCH 08/40] VSOCK: Add Makefile and Kconfig Subject: [PATCH 10/45] VSOCK: Add Makefile and Kconfig
Enable virtio-vsock and vhost-vsock. Enable virtio-vsock and vhost-vsock.
Signed-off-by: Asias He <asias@redhat.com> Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(from RFC v6 <1469716595-13591-7-git-send-email-stefanha@redhat.com>)
--- ---
drivers/vhost/Kconfig | 15 +++++++++++++++ drivers/vhost/Kconfig | 15 +++++++++++++++
drivers/vhost/Makefile | 4 ++++ drivers/vhost/Makefile | 4 ++++
net/vmw_vsock/Kconfig | 19 +++++++++++++++++++ net/vmw_vsock/Kconfig | 20 ++++++++++++++++++++
net/vmw_vsock/Makefile | 2 ++ net/vmw_vsock/Makefile | 6 ++++++
4 files changed, 40 insertions(+) 4 files changed, 45 insertions(+)
diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 533eaf0..d7aae9e 100644 index 533eaf0..d7aae9e 100644
@@ -55,10 +56,10 @@ index e0441c3..6b012b9 100644
+ +
obj-$(CONFIG_VHOST) += vhost.o obj-$(CONFIG_VHOST) += vhost.o
diff --git a/net/vmw_vsock/Kconfig b/net/vmw_vsock/Kconfig diff --git a/net/vmw_vsock/Kconfig b/net/vmw_vsock/Kconfig
index 14810ab..f27e74b 100644 index 14810ab..8831e7c 100644
--- a/net/vmw_vsock/Kconfig --- a/net/vmw_vsock/Kconfig
+++ b/net/vmw_vsock/Kconfig +++ b/net/vmw_vsock/Kconfig
@@ -26,3 +26,22 @@ config VMWARE_VMCI_VSOCKETS @@ -26,3 +26,23 @@ config VMWARE_VMCI_VSOCKETS
To compile this driver as a module, choose M here: the module To compile this driver as a module, choose M here: the module
will be called vmw_vsock_vmci_transport. If unsure, say N. will be called vmw_vsock_vmci_transport. If unsure, say N.
@@ -73,26 +74,33 @@ index 14810ab..f27e74b 100644
+ Enable this transport if your Virtual Machine host supports Virtual + Enable this transport if your Virtual Machine host supports Virtual
+ Sockets over virtio. + Sockets over virtio.
+ +
+ To compile this driver as a module, choose M here: the module + To compile this driver as a module, choose M here: the module will be
+ will be called virtio_vsock_transport. If unsure, say N. + called vmw_vsock_virtio_transport. If unsure, say N.
+ +
+config VIRTIO_VSOCKETS_COMMON +config VIRTIO_VSOCKETS_COMMON
+ tristate + tristate
+ ---help--- + help
+ This option is selected by any driver which needs to access + This option is selected by any driver which needs to access
+ the virtio_vsock. + the virtio_vsock. The module will be called
+ vmw_vsock_virtio_transport_common.
diff --git a/net/vmw_vsock/Makefile b/net/vmw_vsock/Makefile diff --git a/net/vmw_vsock/Makefile b/net/vmw_vsock/Makefile
index 2ce52d7..cf4c294 100644 index 2ce52d7..bc27c70 100644
--- a/net/vmw_vsock/Makefile --- a/net/vmw_vsock/Makefile
+++ b/net/vmw_vsock/Makefile +++ b/net/vmw_vsock/Makefile
@@ -1,5 +1,7 @@ @@ -1,7 +1,13 @@
obj-$(CONFIG_VSOCKETS) += vsock.o obj-$(CONFIG_VSOCKETS) += vsock.o
obj-$(CONFIG_VMWARE_VMCI_VSOCKETS) += vmw_vsock_vmci_transport.o obj-$(CONFIG_VMWARE_VMCI_VSOCKETS) += vmw_vsock_vmci_transport.o
+obj-$(CONFIG_VIRTIO_VSOCKETS) += virtio_transport.o +obj-$(CONFIG_VIRTIO_VSOCKETS) += vmw_vsock_virtio_transport.o
+obj-$(CONFIG_VIRTIO_VSOCKETS_COMMON) += virtio_transport_common.o +obj-$(CONFIG_VIRTIO_VSOCKETS_COMMON) += vmw_vsock_virtio_transport_common.o
vsock-y += af_vsock.o vsock_addr.o vsock-y += af_vsock.o vsock_addr.o
vmw_vsock_vmci_transport-y += vmci_transport.o vmci_transport_notify.o \
vmci_transport_notify_qstate.o
+
+vmw_vsock_virtio_transport-y += virtio_transport.o
+
+vmw_vsock_virtio_transport_common-y += virtio_transport_common.o
-- --
2.9.0 2.9.3
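[Editor's note: with the module renames from the Makefile hunk above, a configuration that builds the whole backported stack would look roughly like the fragment below. CONFIG_VHOST_VSOCK comes from the drivers/vhost/Kconfig hunk whose body is elided here, so treat that symbol as an assumption based on the upstream series:]

```
CONFIG_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_VHOST_VSOCK=m
```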
@@ -0,0 +1,53 @@
From 4438a0966cd79254fcab325f7495130314f339cd Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Thu, 4 Aug 2016 14:52:53 +0100
Subject: [PATCH 11/45] vhost/vsock: fix vhost virtio_vsock_pkt use-after-free
Stash the packet length in a local variable before handing over
ownership of the packet to virtio_transport_recv_pkt() or
virtio_transport_free_pkt().
This patch solves the use-after-free since pkt is no longer guaranteed
to be alive.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 3fda5d6e580193fa005014355b3a61498f1b3ae0)
---
drivers/vhost/vsock.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 028ca16..9e10fb5 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -307,6 +307,8 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
vhost_disable_notify(&vsock->dev, vq);
for (;;) {
+ u32 len;
+
if (!vhost_vsock_more_replies(vsock)) {
/* Stop tx until the device processes already
* pending replies. Leave tx virtqueue
@@ -334,13 +336,15 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
continue;
}
+ len = pkt->len;
+
/* Only accept correctly addressed packets */
if (le64_to_cpu(pkt->hdr.src_cid) == vsock->guest_cid)
virtio_transport_recv_pkt(pkt);
else
virtio_transport_free_pkt(pkt);
- vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+ vhost_add_used(vq, head, sizeof(pkt->hdr) + len);
added = true;
}
--
2.9.3
@@ -0,0 +1,28 @@
From f829b4b9d6ef1fa1d2859f5eceb9001c772541c3 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 5 Aug 2016 13:52:09 +0100
Subject: [PATCH 12/45] virtio-vsock: fix include guard typo
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 28ad55578b8a76390d966b09da8c7fa3644f5140)
---
include/uapi/linux/virtio_vsock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/uapi/linux/virtio_vsock.h b/include/uapi/linux/virtio_vsock.h
index 6b011c1..1d57ed3 100644
--- a/include/uapi/linux/virtio_vsock.h
+++ b/include/uapi/linux/virtio_vsock.h
@@ -32,7 +32,7 @@
*/
#ifndef _UAPI_LINUX_VIRTIO_VSOCK_H
-#define _UAPI_LINUX_VIRTIO_VOSCK_H
+#define _UAPI_LINUX_VIRTIO_VSOCK_H
#include <linux/types.h>
#include <linux/virtio_ids.h>
--
2.9.3
@@ -0,0 +1,61 @@
From 1294bdb2bffedfa825a8572926ba90fa522746c2 Mon Sep 17 00:00:00 2001
From: Gerard Garcia <ggarcia@deic.uab.cat>
Date: Wed, 10 Aug 2016 17:24:34 +0200
Subject: [PATCH 13/45] vhost/vsock: drop space available check for TX vq
Remove unnecessary use of enable/disable callback notifications
and the incorrect more space available check.
The virtio_transport_tx_work handles when the TX virtqueue
has more buffers available.
Signed-off-by: Gerard Garcia <ggarcia@deic.uab.cat>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 21bc54fc0cdc31de72b57d2b3c79cf9c2b83cf39)
---
net/vmw_vsock/virtio_transport.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 699dfab..936d7ee 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -87,9 +87,6 @@ virtio_transport_send_pkt_work(struct work_struct *work)
vq = vsock->vqs[VSOCK_VQ_TX];
- /* Avoid unnecessary interrupts while we're processing the ring */
- virtqueue_disable_cb(vq);
-
for (;;) {
struct virtio_vsock_pkt *pkt;
struct scatterlist hdr, buf, *sgs[2];
@@ -99,7 +96,6 @@ virtio_transport_send_pkt_work(struct work_struct *work)
spin_lock_bh(&vsock->send_pkt_list_lock);
if (list_empty(&vsock->send_pkt_list)) {
spin_unlock_bh(&vsock->send_pkt_list_lock);
- virtqueue_enable_cb(vq);
break;
}
@@ -118,13 +114,13 @@ virtio_transport_send_pkt_work(struct work_struct *work)
}
ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, pkt, GFP_KERNEL);
+ /* Usually this means that there is no more space available in
+ * the vq
+ */
if (ret < 0) {
spin_lock_bh(&vsock->send_pkt_list_lock);
list_add(&pkt->list, &vsock->send_pkt_list);
spin_unlock_bh(&vsock->send_pkt_list_lock);
-
- if (!virtqueue_enable_cb(vq) && ret == -ENOSPC)
- continue; /* retry now that we have more space */
break;
}
--
2.9.3
@@ -1,7 +1,7 @@
From a8a0423ba3b9cc33e2c673890d917318be602145 Mon Sep 17 00:00:00 2001 From a39cc29b3cb531e73d4f03e64e12fc0de62f8d03 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@docker.com> From: Ian Campbell <ian.campbell@docker.com>
Date: Mon, 4 Apr 2016 14:50:10 +0100 Date: Mon, 4 Apr 2016 14:50:10 +0100
Subject: [PATCH 09/40] VSOCK: Only allow host network namespace to use Subject: [PATCH 14/45] VSOCK: Only allow host network namespace to use
AF_VSOCK. AF_VSOCK.
The VSOCK addressing schema does not really lend itself to simply creating an The VSOCK addressing schema does not really lend itself to simply creating an
@@ -13,10 +13,10 @@ Signed-off-by: Ian Campbell <ian.campbell@docker.com>
1 file changed, 3 insertions(+) 1 file changed, 3 insertions(+)
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 15f9595..8373709 100644 index 17dbbe6..1bb1b01 100644
--- a/net/vmw_vsock/af_vsock.c --- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c +++ b/net/vmw_vsock/af_vsock.c
@@ -1840,6 +1840,9 @@ static const struct proto_ops vsock_stream_ops = { @@ -1852,6 +1852,9 @@ static const struct proto_ops vsock_stream_ops = {
static int vsock_create(struct net *net, struct socket *sock, static int vsock_create(struct net *net, struct socket *sock,
int protocol, int kern) int protocol, int kern)
{ {
@@ -27,5 +27,5 @@ index 15f9595..8373709 100644
return -EINVAL; return -EINVAL;
-- --
2.9.0 2.9.3
@@ -1,7 +1,7 @@
From bab5ea0c103fe9e59b91e3f02ffd3a45c4e6be4a Mon Sep 17 00:00:00 2001 From 7dc9f307981ebd10ed716a56a0795af7c3f8ae90 Mon Sep 17 00:00:00 2001
From: Jake Oshins <jakeo@microsoft.com> From: Jake Oshins <jakeo@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:41 -0800 Date: Mon, 14 Dec 2015 16:01:41 -0800
Subject: [PATCH 10/40] drivers:hv: Define the channel type for Hyper-V PCI Subject: [PATCH 15/45] drivers:hv: Define the channel type for Hyper-V PCI
Express pass-through Express pass-through
This defines the channel type for PCI front-ends in Hyper-V VMs. This defines the channel type for PCI front-ends in Hyper-V VMs.
@@ -59,5 +59,5 @@ index ae6a711..10dda1e 100644
*/ */
-- --
2.9.0 2.9.3
@@ -1,7 +1,7 @@
From 6a0ed33229365bc267788283730cde6ddc0c7ff8 Mon Sep 17 00:00:00 2001 From 935d2a3a0446c6b2f256729bd91974e800bef25f Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com> From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:43 -0800 Date: Mon, 14 Dec 2015 16:01:43 -0800
Subject: [PATCH 11/40] Drivers: hv: vmbus: Use uuid_le type consistently Subject: [PATCH 16/45] Drivers: hv: vmbus: Use uuid_le type consistently
Consistently use uuid_le type in the Hyper-V driver code. Consistently use uuid_le type in the Hyper-V driver code.
@@ -30,10 +30,10 @@ index a77646b..38470aa 100644
perf_chn = true; perf_chn = true;
break; break;
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index f19b6f7..e64934e 100644 index 9b5440f..9aadcc2 100644
--- a/drivers/hv/vmbus_drv.c --- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c
@@ -531,7 +531,7 @@ static int vmbus_uevent(struct device *device, struct kobj_uevent_env *env) @@ -532,7 +532,7 @@ static int vmbus_uevent(struct device *device, struct kobj_uevent_env *env)
static const uuid_le null_guid; static const uuid_le null_guid;
@@ -42,7 +42,7 @@ index f19b6f7..e64934e 100644
{ {
if (memcmp(guid, &null_guid, sizeof(uuid_le))) if (memcmp(guid, &null_guid, sizeof(uuid_le)))
return false; return false;
@@ -544,9 +544,9 @@ static inline bool is_null_guid(const __u8 *guid) @@ -545,9 +545,9 @@ static inline bool is_null_guid(const __u8 *guid)
*/ */
static const struct hv_vmbus_device_id *hv_vmbus_get_id( static const struct hv_vmbus_device_id *hv_vmbus_get_id(
const struct hv_vmbus_device_id *id, const struct hv_vmbus_device_id *id,
@@ -54,7 +54,7 @@ index f19b6f7..e64934e 100644
if (!memcmp(&id->guid, guid, sizeof(uuid_le))) if (!memcmp(&id->guid, guid, sizeof(uuid_le)))
return id; return id;
@@ -563,7 +563,7 @@ static int vmbus_match(struct device *device, struct device_driver *driver) @@ -564,7 +564,7 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
struct hv_driver *drv = drv_to_hv_drv(driver); struct hv_driver *drv = drv_to_hv_drv(driver);
struct hv_device *hv_dev = device_to_hv_device(device); struct hv_device *hv_dev = device_to_hv_device(device);
@@ -63,7 +63,7 @@ index f19b6f7..e64934e 100644
return 1; return 1;
return 0; return 0;
@@ -580,7 +580,7 @@ static int vmbus_probe(struct device *child_device) @@ -581,7 +581,7 @@ static int vmbus_probe(struct device *child_device)
struct hv_device *dev = device_to_hv_device(child_device); struct hv_device *dev = device_to_hv_device(child_device);
const struct hv_vmbus_device_id *dev_id; const struct hv_vmbus_device_id *dev_id;
@@ -280,7 +280,7 @@ index 64f36e0..6e4c645 100644
}; };
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 5b96206..8adca44 100644 index 9f5cdd4..8e8c69b 100644
--- a/scripts/mod/file2alias.c --- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c +++ b/scripts/mod/file2alias.c
@@ -917,7 +917,7 @@ static int do_vmbus_entry(const char *filename, void *symval, @@ -917,7 +917,7 @@ static int do_vmbus_entry(const char *filename, void *symval,
@@ -293,5 +293,5 @@ index 5b96206..8adca44 100644
strcpy(alias, "vmbus:"); strcpy(alias, "vmbus:");
strcat(alias, guid_name); strcat(alias, guid_name);
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 2be59f6d9924239fa410a184d06fc7d0338c83fc Mon Sep 17 00:00:00 2001 From c5716756f56e498fc463458cfab19bb071f35469 Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com> From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:44 -0800 Date: Mon, 14 Dec 2015 16:01:44 -0800
Subject: [PATCH 12/40] Drivers: hv: vmbus: Use uuid_le_cmp() for comparing Subject: [PATCH 17/45] Drivers: hv: vmbus: Use uuid_le_cmp() for comparing
GUIDs GUIDs
Use uuid_le_cmp() for comparing GUIDs. Use uuid_le_cmp() for comparing GUIDs.
@@ -29,10 +29,10 @@ index 38470aa..dc4fb0b 100644
break; break;
} }
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index e64934e..aa4d8cc 100644 index 9aadcc2..bf54455 100644
--- a/drivers/hv/vmbus_drv.c --- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c
@@ -533,7 +533,7 @@ static const uuid_le null_guid; @@ -534,7 +534,7 @@ static const uuid_le null_guid;
static inline bool is_null_guid(const uuid_le *guid) static inline bool is_null_guid(const uuid_le *guid)
{ {
@@ -41,7 +41,7 @@ index e64934e..aa4d8cc 100644
return false; return false;
return true; return true;
} }
@@ -547,7 +547,7 @@ static const struct hv_vmbus_device_id *hv_vmbus_get_id( @@ -548,7 +548,7 @@ static const struct hv_vmbus_device_id *hv_vmbus_get_id(
const uuid_le *guid) const uuid_le *guid)
{ {
for (; !is_null_guid(&id->guid); id++) for (; !is_null_guid(&id->guid); id++)
@@ -51,5 +51,5 @@ index e64934e..aa4d8cc 100644
return NULL; return NULL;
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 5111097f2add634ea6b2964841ad15d9149da96e Mon Sep 17 00:00:00 2001 From fb0f783a35e7a0a9e8560fe5809b824bc1c8f8a8 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:47 -0800 Date: Mon, 14 Dec 2015 16:01:47 -0800
Subject: [PATCH 13/40] Drivers: hv: vmbus: serialize process_chn_event() and Subject: [PATCH 18/45] Drivers: hv: vmbus: serialize process_chn_event() and
vmbus_close_internal() vmbus_close_internal()
process_chn_event(), running in the tasklet, can race with process_chn_event(), running in the tasklet, can race with
@@ -83,5 +83,5 @@ index 9098f13..6a90c69 100644
} }
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From df06857f7adf8fb4bb6cacdc80e174c9cb20ee8a Mon Sep 17 00:00:00 2001 From 47057c1030e6a0e9ebd52e8268bb0fd7c6a3020c Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:48 -0800 Date: Mon, 14 Dec 2015 16:01:48 -0800
Subject: [PATCH 14/40] Drivers: hv: vmbus: do sanity check of channel state in Subject: [PATCH 19/45] Drivers: hv: vmbus: do sanity check of channel state in
vmbus_close_internal() vmbus_close_internal()
This fixes an incorrect assumption of channel state in the function. This fixes an incorrect assumption of channel state in the function.
@@ -38,5 +38,5 @@ index 6a90c69..b3c14ca 100644
channel->sc_creation_callback = NULL; channel->sc_creation_callback = NULL;
/* Stop callback and cancel the timer asap */ /* Stop callback and cancel the timer asap */
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 8f7f448b5f9b613f9dec4b4554062c6f709dce96 Mon Sep 17 00:00:00 2001 From 67631d9308a4405dfb219c287b3ec1d152946584 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:49 -0800 Date: Mon, 14 Dec 2015 16:01:49 -0800
Subject: [PATCH 15/40] Drivers: hv: vmbus: fix rescind-offer handling for Subject: [PATCH 20/45] Drivers: hv: vmbus: fix rescind-offer handling for
device without a driver device without a driver
In the path vmbus_onoffer_rescind() -> vmbus_device_unregister() -> In the path vmbus_onoffer_rescind() -> vmbus_device_unregister() ->
@@ -79,10 +79,10 @@ index dc4fb0b..7903acc 100644
vmbus_device_unregister(channel->device_obj); vmbus_device_unregister(channel->device_obj);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index aa4d8cc..5a71b2a 100644 index bf54455..8bf1f31 100644
--- a/drivers/hv/vmbus_drv.c --- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c
@@ -602,23 +602,11 @@ static int vmbus_remove(struct device *child_device) @@ -603,23 +603,11 @@ static int vmbus_remove(struct device *child_device)
{ {
struct hv_driver *drv; struct hv_driver *drv;
struct hv_device *dev = device_to_hv_device(child_device); struct hv_device *dev = device_to_hv_device(child_device);
@@ -106,7 +106,7 @@ index aa4d8cc..5a71b2a 100644
} }
return 0; return 0;
@@ -653,7 +641,10 @@ static void vmbus_shutdown(struct device *child_device) @@ -654,7 +642,10 @@ static void vmbus_shutdown(struct device *child_device)
static void vmbus_device_release(struct device *device) static void vmbus_device_release(struct device *device)
{ {
struct hv_device *hv_dev = device_to_hv_device(device); struct hv_device *hv_dev = device_to_hv_device(device);
@@ -118,5 +118,5 @@ index aa4d8cc..5a71b2a 100644
} }
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From bf5d3511a1f0b147e32eec71f610e780fd11701a Mon Sep 17 00:00:00 2001 From 566fc2785f6bced720caae03060cadcb43faec0b Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:50 -0800 Date: Mon, 14 Dec 2015 16:01:50 -0800
Subject: [PATCH 16/40] Drivers: hv: vmbus: release relid on error in Subject: [PATCH 21/45] Drivers: hv: vmbus: release relid on error in
vmbus_process_offer() vmbus_process_offer()
We want to simplify vmbus_onoffer_rescind() by not invoking We want to simplify vmbus_onoffer_rescind() by not invoking
@@ -70,5 +70,5 @@ index 7903acc..9c9da3a 100644
} }
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From dfc150e7bb90be0591784b1f8a532670ee1c011a Mon Sep 17 00:00:00 2001 From abf8e10f0d5db38d8fbe6df7d1a680162f8d3ffa Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 14 Dec 2015 16:01:51 -0800 Date: Mon, 14 Dec 2015 16:01:51 -0800
Subject: [PATCH 17/40] Drivers: hv: vmbus: channge Subject: [PATCH 22/45] Drivers: hv: vmbus: channge
vmbus_connection.channel_lock to mutex vmbus_connection.channel_lock to mutex
spinlock is unnecessary here. spinlock is unnecessary here.
@@ -112,5 +112,5 @@ index 3782636..d9937be 100644
struct workqueue_struct *work_queue; struct workqueue_struct *work_queue;
}; };
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From c1ebc6e51a4e579e5b20ba4fd517e0da7dc563d1 Mon Sep 17 00:00:00 2001 From 4f9c87c89627ceae3958099a853e65bac51d2491 Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com> From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Mon, 14 Dec 2015 19:02:00 -0800 Date: Mon, 14 Dec 2015 19:02:00 -0800
Subject: [PATCH 18/40] Drivers: hv: remove code duplication between Subject: [PATCH 23/45] Drivers: hv: remove code duplication between
vmbus_recvpacket()/vmbus_recvpacket_raw() vmbus_recvpacket()/vmbus_recvpacket_raw()
vmbus_recvpacket() and vmbus_recvpacket_raw() are almost identical but vmbus_recvpacket() and vmbus_recvpacket_raw() are almost identical but
@@ -122,5 +122,5 @@ index 2889d97..dd6de7f 100644
} }
EXPORT_SYMBOL_GPL(vmbus_recvpacket_raw); EXPORT_SYMBOL_GPL(vmbus_recvpacket_raw);
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From cc4a6558e39bfc3042357669ff74d13f55e4f856 Mon Sep 17 00:00:00 2001 From 172dc2c2f9a6536990de55c4dabfd71ba62833f8 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 21 Dec 2015 12:21:22 -0800 Date: Mon, 21 Dec 2015 12:21:22 -0800
Subject: [PATCH 19/40] Drivers: hv: vmbus: fix the building warning with Subject: [PATCH 24/45] Drivers: hv: vmbus: fix the building warning with
hyperv-keyboard hyperv-keyboard
With the recent change af3ff643ea91ba64dd8d0b1cbed54d44512f96cd With the recent change af3ff643ea91ba64dd8d0b1cbed54d44512f96cd
@@ -68,5 +68,5 @@ index 4712d7d..9e2de6a 100644
*/ */
#define HV_VSS_GUID \ #define HV_VSS_GUID \
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From e9725d58380af9a2549b7fc76f512fc8bddea14b Mon Sep 17 00:00:00 2001 From de9460b1d17a51ab2b3a1811ae3992fec7e15ca2 Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com> From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Tue, 15 Dec 2015 16:27:27 -0800 Date: Tue, 15 Dec 2015 16:27:27 -0800
Subject: [PATCH 20/40] Drivers: hv: vmbus: Treat Fibre Channel devices as Subject: [PATCH 25/45] Drivers: hv: vmbus: Treat Fibre Channel devices as
performance critical performance critical
For performance critical devices, we distribute the incoming For performance critical devices, we distribute the incoming
@@ -38,5 +38,5 @@ index d013171..1c1ad47 100644
{ HV_NIC_GUID, }, { HV_NIC_GUID, },
/* NetworkDirect Guest RDMA */ /* NetworkDirect Guest RDMA */
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 396bc263df13d54d2468c5d2a7888fc9f4067894 Mon Sep 17 00:00:00 2001 From 2b8233468015c17304c8205eb16f798a9334155e Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com> From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Fri, 25 Dec 2015 20:00:30 -0800 Date: Fri, 25 Dec 2015 20:00:30 -0800
Subject: [PATCH 21/40] Drivers: hv: vmbus: Add vendor and device atttributes Subject: [PATCH 26/45] Drivers: hv: vmbus: Add vendor and device atttributes
Add vendor and device attributes to VMBUS devices. These will be used Add vendor and device attributes to VMBUS devices. These will be used
by Hyper-V tools as well user-level RDMA libraries that will use the by Hyper-V tools as well user-level RDMA libraries that will use the
@@ -259,10 +259,10 @@ index 1c1ad47..107d72f 100644
(vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) { (vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
/* /*
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 5a71b2a..3b83dfe 100644 index 8bf1f31..959b656 100644
--- a/drivers/hv/vmbus_drv.c --- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c
@@ -478,6 +478,24 @@ static ssize_t channel_vp_mapping_show(struct device *dev, @@ -479,6 +479,24 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
} }
static DEVICE_ATTR_RO(channel_vp_mapping); static DEVICE_ATTR_RO(channel_vp_mapping);
@@ -287,7 +287,7 @@ index 5a71b2a..3b83dfe 100644
/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */ /* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
static struct attribute *vmbus_attrs[] = { static struct attribute *vmbus_attrs[] = {
&dev_attr_id.attr, &dev_attr_id.attr,
@@ -503,6 +521,8 @@ static struct attribute *vmbus_attrs[] = { @@ -504,6 +522,8 @@ static struct attribute *vmbus_attrs[] = {
&dev_attr_in_read_bytes_avail.attr, &dev_attr_in_read_bytes_avail.attr,
&dev_attr_in_write_bytes_avail.attr, &dev_attr_in_write_bytes_avail.attr,
&dev_attr_channel_vp_mapping.attr, &dev_attr_channel_vp_mapping.attr,
@@ -296,7 +296,7 @@ index 5a71b2a..3b83dfe 100644
NULL, NULL,
}; };
ATTRIBUTE_GROUPS(vmbus); ATTRIBUTE_GROUPS(vmbus);
@@ -957,6 +977,7 @@ struct hv_device *vmbus_device_create(const uuid_le *type, @@ -960,6 +980,7 @@ struct hv_device *vmbus_device_create(const uuid_le *type,
memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le)); memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le));
memcpy(&child_device_obj->dev_instance, instance, memcpy(&child_device_obj->dev_instance, instance,
sizeof(uuid_le)); sizeof(uuid_le));
@@ -351,5 +351,5 @@ index 9e2de6a..51c98fd 100644
struct device device; struct device device;
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 864ad2a0988143fecd3b3c9282777a6930b34d5e Mon Sep 17 00:00:00 2001 From 6a01db640f2950f33a51bc4549058d491c8d1314 Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com> From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Wed, 27 Jan 2016 22:29:34 -0800 Date: Wed, 27 Jan 2016 22:29:34 -0800
Subject: [PATCH 22/40] Drivers: hv: vmbus: avoid infinite loop in Subject: [PATCH 27/45] Drivers: hv: vmbus: avoid infinite loop in
init_vp_index() init_vp_index()
When we pick a CPU to use for a new subchannel we try find a non-used one When we pick a CPU to use for a new subchannel we try find a non-used one
@@ -45,5 +45,5 @@ index 107d72f..af1d82e 100644
cur_cpu = cpumask_next(cur_cpu, &available_mask); cur_cpu = cpumask_next(cur_cpu, &available_mask);
if (cur_cpu >= nr_cpu_ids) { if (cur_cpu >= nr_cpu_ids) {
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From fa2a38837ff00acfa5a1cb8fedbff33260f36653 Mon Sep 17 00:00:00 2001 From 798509ab822925cc9bbe9b3f7d56ceb5869aa61c Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com> From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Wed, 27 Jan 2016 22:29:35 -0800 Date: Wed, 27 Jan 2016 22:29:35 -0800
Subject: [PATCH 23/40] Drivers: hv: vmbus: avoid scheduling in interrupt Subject: [PATCH 28/45] Drivers: hv: vmbus: avoid scheduling in interrupt
context in vmbus_initiate_unload() context in vmbus_initiate_unload()
We have to call vmbus_initiate_unload() on crash to make kdump work but We have to call vmbus_initiate_unload() on crash to make kdump work but
@@ -95,5 +95,5 @@ index af1d82e..d6c6114 100644
/* /*
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From d24827511ecaab87a0763e43a13e7c742338ca90 Mon Sep 17 00:00:00 2001 From 1cea598f33747f6f221600e4135fa1c7eb0339b3 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:37 -0800 Date: Wed, 27 Jan 2016 22:29:37 -0800
Subject: [PATCH 24/40] Drivers: hv: vmbus: add a helper function to set a Subject: [PATCH 29/45] Drivers: hv: vmbus: add a helper function to set a
channel's pending send size channel's pending send size
This will be used by the coming net/hvsock driver. This will be used by the coming net/hvsock driver.
@@ -32,5 +32,5 @@ index 51c98fd..934542a 100644
int vmbus_request_offers(void); int vmbus_request_offers(void);
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 81caa506ed7b4af65b15c7f79e2cd82ba9565d8e Mon Sep 17 00:00:00 2001 From 484e9fa916f11b145201a8473142977d260f924a Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:38 -0800 Date: Wed, 27 Jan 2016 22:29:38 -0800
Subject: [PATCH 25/40] Drivers: hv: vmbus: define the new offer type for Subject: [PATCH 30/45] Drivers: hv: vmbus: define the new offer type for
Hyper-V socket (hvsock) Hyper-V socket (hvsock)
A helper function is also added. A helper function is also added.
@@ -40,5 +40,5 @@ index 934542a..a4f105d 100644
enum hv_signal_policy policy) enum hv_signal_policy policy)
{ {
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 460356566f2e7d6a3b8f70154d0208620ee93063 Mon Sep 17 00:00:00 2001 From e076eff3df2c5592ca711a0f3b9665ee9a9981c7 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:39 -0800 Date: Wed, 27 Jan 2016 22:29:39 -0800
Subject: [PATCH 26/40] Drivers: hv: vmbus: vmbus_sendpacket_ctl: hvsock: avoid Subject: [PATCH 31/45] Drivers: hv: vmbus: vmbus_sendpacket_ctl: hvsock: avoid
unnecessary signaling unnecessary signaling
When the hvsock channel's outbound ringbuffer is full (i.e., When the hvsock channel's outbound ringbuffer is full (i.e.,
@@ -41,5 +41,5 @@ index dd6de7f..128dcf2 100644
return ret; return ret;
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 7b1854b482f0a1e22e3ff5dfc5111ae968354b55 Mon Sep 17 00:00:00 2001 From ef0c006df50c1a0f91827f2f84bb0936952e1361 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:40 -0800 Date: Wed, 27 Jan 2016 22:29:40 -0800
Subject: [PATCH 27/40] Drivers: hv: vmbus: define a new VMBus message type for Subject: [PATCH 32/45] Drivers: hv: vmbus: define a new VMBus message type for
hvsock hvsock
A function to send the type of message is also added. A function to send the type of message is also added.
@@ -97,5 +97,5 @@ index a4f105d..191bc5d 100644
+ const uuid_le *shv_host_servie_id); + const uuid_le *shv_host_servie_id);
#endif /* _HYPERV_H */ #endif /* _HYPERV_H */
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 80bf79d36dbffe682ccc62c8a4210e3ba4a6ecd3 Mon Sep 17 00:00:00 2001 From 8271769466aa4286ad5fe29e7bddc85701965314 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:41 -0800 Date: Wed, 27 Jan 2016 22:29:41 -0800
Subject: [PATCH 28/40] Drivers: hv: vmbus: add a hvsock flag in struct Subject: [PATCH 33/45] Drivers: hv: vmbus: add a hvsock flag in struct
hv_driver hv_driver
Only the coming hv_sock driver has a "true" value for this flag. Only the coming hv_sock driver has a "true" value for this flag.
@@ -20,10 +20,10 @@ Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2 files changed, 18 insertions(+) 2 files changed, 18 insertions(+)
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 3b83dfe..d76a65f 100644 index 959b656..d46b4ff 100644
--- a/drivers/hv/vmbus_drv.c --- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c
@@ -583,6 +583,10 @@ static int vmbus_match(struct device *device, struct device_driver *driver) @@ -584,6 +584,10 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
struct hv_driver *drv = drv_to_hv_drv(driver); struct hv_driver *drv = drv_to_hv_drv(driver);
struct hv_device *hv_dev = device_to_hv_device(device); struct hv_device *hv_dev = device_to_hv_device(device);
@@ -60,5 +60,5 @@ index 191bc5d..05966e2 100644
uuid_le dev_type; uuid_le dev_type;
const struct hv_vmbus_device_id *id_table; const struct hv_vmbus_device_id *id_table;
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 82e699723be65d45ae64999eb60bde5cbb016368 Mon Sep 17 00:00:00 2001 From 810c87d25aae195016a8d3a90aa3cf7258e2c7cc Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:42 -0800 Date: Wed, 27 Jan 2016 22:29:42 -0800
Subject: [PATCH 29/40] Drivers: hv: vmbus: add a per-channel rescind callback Subject: [PATCH 34/45] Drivers: hv: vmbus: add a per-channel rescind callback
This will be used by the coming hv_sock driver. This will be used by the coming hv_sock driver.
@@ -68,5 +68,5 @@ index 05966e2..ad04017 100644
* Retrieve the (sub) channel on which to send an outgoing request. * Retrieve the (sub) channel on which to send an outgoing request.
* When a primary channel has multiple sub-channels, we choose a * When a primary channel has multiple sub-channels, we choose a
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From d29f80db881010c9eb7f7f09e33adcb2ec815f58 Mon Sep 17 00:00:00 2001 From b20109841eac978453e3514b51fb93aa292ece09 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:43 -0800 Date: Wed, 27 Jan 2016 22:29:43 -0800
Subject: [PATCH 30/40] Drivers: hv: vmbus: add an API Subject: [PATCH 35/45] Drivers: hv: vmbus: add an API
vmbus_hvsock_device_unregister() vmbus_hvsock_device_unregister()
The hvsock driver needs this API to release all the resources related The hvsock driver needs this API to release all the resources related
@@ -149,5 +149,5 @@ index ad04017..993318a 100644
resource_size_t min, resource_size_t max, resource_size_t min, resource_size_t max,
resource_size_t size, resource_size_t align, resource_size_t size, resource_size_t align,
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 73a45a3f46df47b046fea2fa656602a3422ba54a Mon Sep 17 00:00:00 2001 From b6ad0bd0eb97a4704e3e4f857282f41dfc8a08d8 Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan" <kys@microsoft.com> From: "K. Y. Srinivasan" <kys@microsoft.com>
Date: Wed, 27 Jan 2016 22:29:45 -0800 Date: Wed, 27 Jan 2016 22:29:45 -0800
Subject: [PATCH 31/40] Drivers: hv: vmbus: Give control over how the ring Subject: [PATCH 36/45] Drivers: hv: vmbus: Give control over how the ring
access is serialized access is serialized
On the channel send side, many of the VMBUS On the channel send side, many of the VMBUS
@@ -204,5 +204,5 @@ index 993318a..6c9695e 100644
{ {
return !!(c->offermsg.offer.chn_flags & return !!(c->offermsg.offer.chn_flags &
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From aa5c8776d3db2e7f09804631aec6d8e7b7265332 Mon Sep 17 00:00:00 2001 From f465de8d2676b9f125255e8f3eec5e631d07d60e Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com> From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Fri, 26 Feb 2016 15:13:16 -0800 Date: Fri, 26 Feb 2016 15:13:16 -0800
Subject: [PATCH 32/40] Drivers: hv: vmbus: avoid wait_for_completion() on Subject: [PATCH 37/45] Drivers: hv: vmbus: avoid wait_for_completion() on
crash crash
MIME-Version: 1.0 MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8 Content-Type: text/plain; charset=UTF-8
@@ -74,10 +74,10 @@ index b925fa3..10efab0 100644
static inline void hv_poll_channel(struct vmbus_channel *channel, static inline void hv_poll_channel(struct vmbus_channel *channel,
void (*cb)(void *)) void (*cb)(void *))
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index d76a65f..45ea71e 100644 index d46b4ff..f5f57ee 100644
--- a/drivers/hv/vmbus_drv.c --- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c
@@ -1263,7 +1263,7 @@ static void hv_kexec_handler(void) @@ -1266,7 +1266,7 @@ static void hv_kexec_handler(void)
int cpu; int cpu;
hv_synic_clockevents_cleanup(); hv_synic_clockevents_cleanup();
@@ -86,7 +86,7 @@ index d76a65f..45ea71e 100644
for_each_online_cpu(cpu) for_each_online_cpu(cpu)
smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1); smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
hv_cleanup(); hv_cleanup();
@@ -1271,7 +1271,7 @@ static void hv_kexec_handler(void) @@ -1274,7 +1274,7 @@ static void hv_kexec_handler(void)
static void hv_crash_handler(struct pt_regs *regs) static void hv_crash_handler(struct pt_regs *regs)
{ {
@@ -96,5 +96,5 @@ index d76a65f..45ea71e 100644
* In crash handler we can't schedule synic cleanup for all CPUs, * In crash handler we can't schedule synic cleanup for all CPUs,
* doing the cleanup for current CPU only. This should be sufficient * doing the cleanup for current CPU only. This should be sufficient
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From f6eda6e9caee4b2eac5c5ad7a63002b7691f2102 Mon Sep 17 00:00:00 2001 From d7b1742f30b68fae9c0bd627704106f7665c30ed Mon Sep 17 00:00:00 2001
From: Vitaly Kuznetsov <vkuznets@redhat.com> From: Vitaly Kuznetsov <vkuznets@redhat.com>
Date: Fri, 26 Feb 2016 15:13:18 -0800 Date: Fri, 26 Feb 2016 15:13:18 -0800
Subject: [PATCH 33/40] Drivers: hv: vmbus: avoid unneeded compiler Subject: [PATCH 38/45] Drivers: hv: vmbus: avoid unneeded compiler
optimizations in vmbus_wait_for_unload() optimizations in vmbus_wait_for_unload()
MIME-Version: 1.0 MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8 Content-Type: text/plain; charset=UTF-8
@@ -35,5 +35,5 @@ index f70e352..c892db5 100644
continue; continue;
} }
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From f377090a6256e4bbe8bc26ee524a03ea198fb402 Mon Sep 17 00:00:00 2001 From 556c8b33b1690096d9b216184b0c892e058e8d76 Mon Sep 17 00:00:00 2001
From: Tom Herbert <tom@herbertland.com> From: Tom Herbert <tom@herbertland.com>
Date: Mon, 7 Mar 2016 14:11:06 -0800 Date: Mon, 7 Mar 2016 14:11:06 -0800
Subject: [PATCH 34/40] kcm: Kernel Connection Multiplexor module Subject: [PATCH 39/45] kcm: Kernel Connection Multiplexor module
This module implements the Kernel Connection Multiplexor. This module implements the Kernel Connection Multiplexor.
@@ -2308,5 +2308,5 @@ index 0000000..649d246
+MODULE_LICENSE("GPL"); +MODULE_LICENSE("GPL");
+MODULE_ALIAS_NETPROTO(PF_KCM); +MODULE_ALIAS_NETPROTO(PF_KCM);
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 1aaf078c41b6150c42a40ca165003c5b902ba541 Mon Sep 17 00:00:00 2001 From d3f41a6e22aecae1e0502dadc7ff1847f9b64af0 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 21 Mar 2016 02:51:09 -0700 Date: Mon, 21 Mar 2016 02:51:09 -0700
Subject: [PATCH 35/40] net: add the AF_KCM entries to family name tables Subject: [PATCH 40/45] net: add the AF_KCM entries to family name tables
This is for the recent kcm driver, which introduces AF_KCM(41) in This is for the recent kcm driver, which introduces AF_KCM(41) in
b7ac4eb(kcm: Kernel Connection Multiplexor module). b7ac4eb(kcm: Kernel Connection Multiplexor module).
@@ -48,5 +48,5 @@ index 0d91f7d..925def4 100644
/* /*
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 46cb53195e3a65ab11c6447b855ef9684f29812b Mon Sep 17 00:00:00 2001 From 8d3d67fb44d8d9b44b9923d1cbddf9285b9b68ae Mon Sep 17 00:00:00 2001
From: Courtney Cavin <courtney.cavin@sonymobile.com> From: Courtney Cavin <courtney.cavin@sonymobile.com>
Date: Wed, 27 Apr 2016 12:13:03 -0700 Date: Wed, 27 Apr 2016 12:13:03 -0700
Subject: [PATCH 36/40] net: Add Qualcomm IPC router Subject: [PATCH 41/45] net: Add Qualcomm IPC router
Add an implementation of Qualcomm's IPC router protocol, used to Add an implementation of Qualcomm's IPC router protocol, used to
communicate with service providing remote processors. communicate with service providing remote processors.
@@ -1303,5 +1303,5 @@ index 0000000..84ebce7
+MODULE_DESCRIPTION("Qualcomm IPC-Router SMD interface driver"); +MODULE_DESCRIPTION("Qualcomm IPC-Router SMD interface driver");
+MODULE_LICENSE("GPL v2"); +MODULE_LICENSE("GPL v2");
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 04b3d63b757432d90ce323a748b85a866114d9ac Mon Sep 17 00:00:00 2001 From 9d627b917f9bf189f938e30fe4c6b11c241e204e Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Sun, 15 May 2016 09:53:11 -0700 Date: Sun, 15 May 2016 09:53:11 -0700
Subject: [PATCH 37/40] hv_sock: introduce Hyper-V Sockets Subject: [PATCH 42/45] hv_sock: introduce Hyper-V Sockets
Hyper-V Sockets (hv_sock) supplies a byte-stream based communication Hyper-V Sockets (hv_sock) supplies a byte-stream based communication
mechanism between the host and the guest. It's somewhat like TCP over mechanism between the host and the guest. It's somewhat like TCP over
@@ -41,10 +41,10 @@ Origin: https://patchwork.ozlabs.org/patch/622404/
create mode 100644 net/hv_sock/af_hvsock.c create mode 100644 net/hv_sock/af_hvsock.c
diff --git a/MAINTAINERS b/MAINTAINERS diff --git a/MAINTAINERS b/MAINTAINERS
index 12d49f5..fa87bdd 100644 index fa94182..ff17e76 100644
--- a/MAINTAINERS --- a/MAINTAINERS
+++ b/MAINTAINERS +++ b/MAINTAINERS
@@ -5123,7 +5123,9 @@ F: drivers/input/serio/hyperv-keyboard.c @@ -5136,7 +5136,9 @@ F: drivers/input/serio/hyperv-keyboard.c
F: drivers/net/hyperv/ F: drivers/net/hyperv/
F: drivers/scsi/storvsc_drv.c F: drivers/scsi/storvsc_drv.c
F: drivers/video/fbdev/hyperv_fb.c F: drivers/video/fbdev/hyperv_fb.c
@@ -1801,5 +1801,5 @@ index 0000000..b91bd60
+MODULE_DESCRIPTION("Hyper-V Sockets"); +MODULE_DESCRIPTION("Hyper-V Sockets");
+MODULE_LICENSE("Dual BSD/GPL"); +MODULE_LICENSE("Dual BSD/GPL");
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 179a4a8d5ad76cab98d2bc4b0f2633898d59e9f8 Mon Sep 17 00:00:00 2001 From 0436560566688b27d0a1f505481b2bb3b738888e Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Mon, 21 Mar 2016 02:53:08 -0700 Date: Mon, 21 Mar 2016 02:53:08 -0700
Subject: [PATCH 38/40] net: add the AF_HYPERV entries to family name tables Subject: [PATCH 43/45] net: add the AF_HYPERV entries to family name tables
This is for the hv_sock driver, which introduces AF_HYPERV(42). This is for the hv_sock driver, which introduces AF_HYPERV(42).
@@ -45,5 +45,5 @@ index 925def4..323f7a3 100644
/* /*
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From 6857a15b84ea9a592cfbe8a9064d77a63d49bc68 Mon Sep 17 00:00:00 2001 From 2fdbef30e06e66b0de2ddc9a424db56501cf546b Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui@microsoft.com> From: Dexuan Cui <decui@microsoft.com>
Date: Sat, 21 May 2016 16:55:50 +0800 Date: Sat, 21 May 2016 16:55:50 +0800
Subject: [PATCH 39/40] Drivers: hv: vmbus: fix the race when querying & Subject: [PATCH 44/45] Drivers: hv: vmbus: fix the race when querying &
updating the percpu list updating the percpu list
There is a rare race when we remove an entry from the global list There is a rare race when we remove an entry from the global list
@@ -129,5 +129,5 @@ index c892db5..0a54317 100644
err_free_chan: err_free_chan:
free_channel(newchannel); free_channel(newchannel);
-- --
2.9.0 2.9.3


@@ -1,7 +1,7 @@
From e3e9e646e4cd69f7dec7f5acc4a30cb1beb6e95a Mon Sep 17 00:00:00 2001 From 24c21ccb6524b1ecff1ff47ef8b39375b763d79f Mon Sep 17 00:00:00 2001
From: Rolf Neugebauer <rolf.neugebauer@gmail.com> From: Rolf Neugebauer <rolf.neugebauer@gmail.com>
Date: Mon, 23 May 2016 18:55:45 +0100 Date: Mon, 23 May 2016 18:55:45 +0100
Subject: [PATCH 40/40] vmbus: Don't spam the logs with unknown GUIDs Subject: [PATCH 45/45] vmbus: Don't spam the logs with unknown GUIDs
With Hyper-V sockets device types are introduced on the fly. The pr_info() With Hyper-V sockets device types are introduced on the fly. The pr_info()
then prints a message on every connection, which is way too verbose. Since then prints a message on every connection, which is way too verbose. Since
@@ -26,5 +26,5 @@ index 0a54317..120ee22 100644
} }
-- --
2.9.0 2.9.3