
vendor: update buildkit

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>

Tonis Tiigi authored on 2017/09/22 09:54:27
Showing 20 changed files
... ...
@@ -2,7 +2,6 @@
2 2
 github.com/Azure/go-ansiterm 19f72df4d05d31cbe1c56bfc8045c96babff6c7e
3 3
 github.com/Microsoft/hcsshim v0.6.5
4 4
 github.com/Microsoft/go-winio v0.4.5
5
-github.com/moby/buildkit da2b9dc7dab99e824b2b1067ad7d0523e32dd2d9 https://github.com/dmcgowan/buildkit.git
6 5
 github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
7 6
 github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
8 7
 github.com/go-check/check 4ed411733c5785b40214c70bce814c3a3a689609 https://github.com/cpuguy83/check.git
... ...
@@ -28,6 +27,8 @@ github.com/imdario/mergo 0.2.1
28 28
 golang.org/x/sync de49d9dcd27d4f764488181bea099dfe6179bcf0
29 29
 
30 30
 github.com/containerd/continuity 22694c680ee48fb8f50015b44618517e2bde77e8
31
+github.com/moby/buildkit c2dbdeb457ea665699a5d97f79eebfac4ab4726f https://github.com/tonistiigi/buildkit.git
32
+github.com/tonistiigi/fsutil 1dedf6e90084bd88c4c518a15e68a37ed1370203
31 33
 
32 34
 #get libnetwork packages
33 35
 github.com/docker/libnetwork 60e002dd61885e1cd909582f00f7eb4da634518a
... ...
@@ -107,7 +108,6 @@ google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
107 107
 github.com/containerd/containerd 06b9cb35161009dcb7123345749fef02f7cea8e0
108 108
 github.com/tonistiigi/fifo 1405643975692217d6720f8b54aeee1bf2cd5cf4
109 109
 github.com/stevvooe/continuity cd7a8e21e2b6f84799f5dd4b65faf49c8d3ee02d
110
-github.com/tonistiigi/fsutil 0ac4c11b053b9c5c7c47558f81f96c7100ce50fb
111 110
 
112 111
 # cluster
113 112
 github.com/docker/swarmkit bd7bafb8a61de1f5f23c8215ce7b9ecbcb30ff21
... ...
@@ -1,10 +1,16 @@
1
-### Important: This repository is in an early development phase and not suitable for practical workloads. It does not compare with `docker build` features yet.
1
+### Important: This repository is in an early development phase
2 2
 
3 3
 [![asciicinema example](https://asciinema.org/a/gPEIEo1NzmDTUu2bEPsUboqmU.png)](https://asciinema.org/a/gPEIEo1NzmDTUu2bEPsUboqmU)
4 4
 
5 5
 
6 6
 ## BuildKit
7 7
 
8
+<!-- godoc is mainly for LLB stuff -->
9
+[![GoDoc](https://godoc.org/github.com/moby/buildkit?status.svg)](https://godoc.org/github.com/moby/buildkit/client/llb)
10
+[![Build Status](https://travis-ci.org/moby/buildkit.svg?branch=master)](https://travis-ci.org/moby/buildkit)
11
+[![Go Report Card](https://goreportcard.com/badge/github.com/moby/buildkit)](https://goreportcard.com/report/github.com/moby/buildkit)
12
+
13
+
8 14
 BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.
9 15
 
10 16
 Key features:
... ...
@@ -23,7 +29,7 @@ Read the proposal from https://github.com/moby/moby/issues/32925
23 23
 
24 24
 #### Quick start
25 25
 
26
-BuildKit daemon can be built in two different versions: one that uses [containerd](https://github.com/containerd/containerd) for execution and distribution, and a standalone version that doesn't have other dependencies apart from [runc](https://github.com/opencontainers/runc). We are open for adding more backends. `buildd` is a CLI utility for running the gRPC API. 
26
+BuildKit daemon can be built in two different versions: one that uses [containerd](https://github.com/containerd/containerd) for execution and distribution, and a standalone version that doesn't have other dependencies apart from [runc](https://github.com/opencontainers/runc). We are open for adding more backends. `buildd` is a CLI utility for serving the gRPC API. 
27 27
 
28 28
 ```bash
29 29
 # buildd daemon (choose one)
... ...
@@ -36,17 +42,15 @@ go build -o buildctl ./cmd/buildctl
36 36
 
37 37
 You can also use `make binaries`, which prepares all binaries into the `bin/` directory.
38 38
 
39
-The first thing to test could be to try building BuildKit with BuildKit. BuildKit provides a low-level solver format that could be used by multiple build definitions. Preparation work for making the Dockerfile parser reusable as a frontend is tracked in https://github.com/moby/moby/pull/33492. As no frontends have been integrated yet we currently have to use a client library to generate this low-level definition.
40
-
41 39
 `examples/buildkit*` directory contains scripts that define how to build different configurations of BuildKit and its dependencies using the `client` package. Running one of these scripts generates a protobuf definition of a build graph. Note that the script itself does not execute any steps of the build.
42 40
 
43
-You can use `buildctl debug dump-llb` to see what data is this definition.
41
+You can use `buildctl debug dump-llb` to see what data is in this definition. Add `--dot` to generate a dot layout.
44 42
 
45 43
 ```bash
46 44
 go run examples/buildkit0/buildkit.go | buildctl debug dump-llb | jq .
47 45
 ```
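A build definition like the one piped above comes from a small Go program built on the `client/llb` package. A minimal sketch, assuming the `Image`/`Run`/`Shlex`/`Marshal`/`WriteTo` helpers of the public `client/llb` API (exact signatures may differ in this vendored revision):

```go
package main

import (
	"os"

	"github.com/moby/buildkit/client/llb"
)

func main() {
	// Start from a base image and run one command on top of it.
	st := llb.Image("docker.io/library/alpine:latest").
		Run(llb.Shlex("apk add --no-cache git")).
		Root()

	// Marshal the state into the low-level protobuf definition and write it
	// to stdout so it can be piped into `buildctl debug dump-llb` or
	// `buildctl build`, as in the examples above.
	def, err := st.Marshal()
	if err != nil {
		panic(err)
	}
	if err := llb.WriteTo(def, os.Stdout); err != nil {
		panic(err)
	}
}
```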
48 46
 
49
-To start building use `buildctl build` command. The script accepts `--target` flag to choose between `containerd` and `standalone` configurations. In standalone mode BuildKit binaries are built together with `runc`. In containerd mode, the `containerd` binary is built as well from the upstream repo.
47
+To start building, use the `buildctl build` command. The example script accepts a `--target` flag to choose between the `containerd` and `standalone` configurations. In standalone mode, BuildKit binaries are built together with `runc`. In containerd mode, the `containerd` binary is built from the upstream repo as well.
50 48
 
51 49
 ```bash
52 50
 go run examples/buildkit0/buildkit.go | buildctl build
... ...
@@ -59,10 +63,52 @@ Different versions of the example scripts show different ways of describing the
59 59
 - `./examples/buildkit0` - uses only exec operations, defines a full stage per component.
60 60
 - `./examples/buildkit1` - cloning git repositories has been separated for extra concurrency.
61 61
 - `./examples/buildkit2` - uses git sources directly instead of running `git clone`, allowing better performance and much safer caching.
62
+- `./examples/buildkit3` - allows using local source files for separate components, e.g. `./buildkit3 --runc=local | buildctl build --local runc-src=some/local/path`
63
+- `./examples/dockerfile2llb` - can be used to convert a Dockerfile to LLB for debugging purposes
64
+- `./examples/gobuild` - shows how to use nested invocation to generate LLB for Go package internal dependencies
65
+
66
+
67
+#### Examples
68
+
69
+##### Starting the buildd daemon:
70
+
71
+```
72
+buildd-standalone --debug --root /var/lib/buildkit
73
+```
74
+
75
+##### Building a Dockerfile:
76
+
77
+```
78
+buildctl build --frontend=dockerfile.v0 --local context=. --local dockerfile=.
79
+```
80
+
81
+`context` and `dockerfile` should point to local directories for build context and Dockerfile location.
82
+
83
+
84
+##### Exporting resulting image to containerd
85
+
86
+The containerd version of buildd needs to be used:
87
+
88
+```
89
+buildctl build ... --exporter=image --exporter-opt name=docker.io/username/image
90
+ctr --namespace=buildkit images ls
91
+```
92
+
93
+##### Exporting build result back to client
94
+
95
+```
96
+buildctl build ... --exporter=local --exporter-opt output=path/to/output-dir
97
+```
98
+
99
+#### View build cache
100
+
101
+```
102
+buildctl du -v
103
+```
62 104
 
63 105
 #### Supported runc version
64 106
 
65
-During development buildkit is tested with the version of runc that is being used by the containerd repository. Please refer to [runc.md](https://github.com/containerd/containerd/blob/3707703a694187c7d08e2f333da6ddd58bcb729d/RUNC.md) for more information.
107
+During development buildkit is tested with the version of runc that is being used by the containerd repository. Please refer to [runc.md](https://github.com/containerd/containerd/blob/d1e11f17ec7b325f89608dd46c128300b8727d50/RUNC.md) for more information.
66 108
 
67 109
 
68 110
 #### Contributing
69 111
new file mode 100644
... ...
@@ -0,0 +1,22 @@
0
+package session
1
+
2
+import "context"
3
+
4
+type contextKeyT string
5
+
6
+var contextKey = contextKeyT("buildkit/session-id")
7
+
8
+func NewContext(ctx context.Context, id string) context.Context {
9
+	if id != "" {
10
+		return context.WithValue(ctx, contextKey, id)
11
+	}
12
+	return ctx
13
+}
14
+
15
+func FromContext(ctx context.Context) string {
16
+	v := ctx.Value(contextKey)
17
+	if v == nil {
18
+		return ""
19
+	}
20
+	return v.(string)
21
+}
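The two helpers above form the whole API of this new file: `NewContext` stashes a session ID in a request context and `FromContext` recovers it later. A minimal usage sketch (the ID string is a placeholder):

```go
package main

import (
	"context"
	"fmt"

	"github.com/moby/buildkit/session"
)

func main() {
	// Attach a session ID to a request context and read it back further
	// down the call chain, e.g. to look the session up via Manager.Get.
	ctx := session.NewContext(context.Background(), "example-session-id")
	fmt.Println(session.FromContext(ctx)) // "example-session-id"

	// An empty ID leaves the context untouched, so FromContext returns "".
	fmt.Println(session.FromContext(context.Background())) // ""
}
```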
... ...
@@ -1,31 +1,55 @@
1 1
 package filesync
2 2
 
3 3
 import (
4
+	"os"
4 5
 	"time"
5 6
 
6
-	"google.golang.org/grpc"
7
-
8 7
 	"github.com/sirupsen/logrus"
9 8
 	"github.com/tonistiigi/fsutil"
9
+	"google.golang.org/grpc"
10 10
 )
11 11
 
12
-func sendDiffCopy(stream grpc.Stream, dir string, includes, excludes []string, progress progressCb) error {
12
+func sendDiffCopy(stream grpc.Stream, dir string, includes, excludes []string, progress progressCb, _map func(*fsutil.Stat) bool) error {
13 13
 	return fsutil.Send(stream.Context(), stream, dir, &fsutil.WalkOpt{
14 14
 		ExcludePatterns: excludes,
15
-		IncludePaths:    includes, // TODO: rename IncludePatterns
15
+		IncludePatterns: includes,
16
+		Map:             _map,
16 17
 	}, progress)
17 18
 }
18 19
 
19
-func recvDiffCopy(ds grpc.Stream, dest string, cu CacheUpdater) error {
20
+func recvDiffCopy(ds grpc.Stream, dest string, cu CacheUpdater, progress progressCb) error {
20 21
 	st := time.Now()
21 22
 	defer func() {
22 23
 		logrus.Debugf("diffcopy took: %v", time.Since(st))
23 24
 	}()
24 25
 	var cf fsutil.ChangeFunc
26
+	var ch fsutil.ContentHasher
25 27
 	if cu != nil {
26 28
 		cu.MarkSupported(true)
27 29
 		cf = cu.HandleChange
30
+		ch = cu.ContentHasher()
28 31
 	}
32
+	return fsutil.Receive(ds.Context(), ds, dest, fsutil.ReceiveOpt{
33
+		NotifyHashed:  cf,
34
+		ContentHasher: ch,
35
+		ProgressCb:    progress,
36
+	})
37
+}
29 38
 
30
-	return fsutil.Receive(ds.Context(), ds, dest, cf)
39
+func syncTargetDiffCopy(ds grpc.Stream, dest string) error {
40
+	if err := os.MkdirAll(dest, 0700); err != nil {
41
+		return err
42
+	}
43
+	return fsutil.Receive(ds.Context(), ds, dest, fsutil.ReceiveOpt{
44
+		Merge: true,
45
+		Filter: func() func(*fsutil.Stat) bool {
46
+			uid := os.Getuid()
47
+			gid := os.Getgid()
48
+			return func(st *fsutil.Stat) bool {
49
+				st.Uid = uint32(uid)
50
+				st.Gid = uint32(gid)
51
+				return true
52
+			}
53
+		}(),
54
+	})
31 55
 }
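The new `progress` and `_map`/`Filter` arguments are plain callbacks. A sketch of the shapes they take; the function names here are illustrative, not part of the package:

```go
package main

import (
	"strings"

	"github.com/sirupsen/logrus"
	"github.com/tonistiigi/fsutil"
)

// exampleProgress matches the progressCb shape: cumulative byte count plus
// a final "done" notification.
func exampleProgress(size int, done bool) {
	if done {
		logrus.Debugf("diffcopy transferred %d bytes", size)
	}
}

// exampleMap matches the Map/Filter shape: it may rewrite the Stat in
// place, and returning false drops the entry from the stream.
func exampleMap(st *fsutil.Stat) bool {
	st.Uid, st.Gid = 0, 0
	return !strings.HasSuffix(st.Path, ".tmp")
}

func main() {}
```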
... ...
@@ -1,6 +1,7 @@
1 1
 package filesync
2 2
 
3 3
 import (
4
+	"fmt"
4 5
 	"os"
5 6
 	"strings"
6 7
 
... ...
@@ -15,20 +16,29 @@ import (
15 15
 const (
16 16
 	keyOverrideExcludes = "override-excludes"
17 17
 	keyIncludePatterns  = "include-patterns"
18
+	keyDirName          = "dir-name"
18 19
 )
19 20
 
20 21
 type fsSyncProvider struct {
21
-	root     string
22
-	excludes []string
23
-	p        progressCb
24
-	doneCh   chan error
22
+	dirs   map[string]SyncedDir
23
+	p      progressCb
24
+	doneCh chan error
25
+}
26
+
27
+type SyncedDir struct {
28
+	Name     string
29
+	Dir      string
30
+	Excludes []string
31
+	Map      func(*fsutil.Stat) bool
25 32
 }
26 33
 
27 34
 // NewFSSyncProvider creates a new provider for sending files from client
28
-func NewFSSyncProvider(root string, excludes []string) session.Attachable {
35
+func NewFSSyncProvider(dirs []SyncedDir) session.Attachable {
29 36
 	p := &fsSyncProvider{
30
-		root:     root,
31
-		excludes: excludes,
37
+		dirs: map[string]SyncedDir{},
38
+	}
39
+	for _, d := range dirs {
40
+		p.dirs[d.Name] = d
32 41
 	}
33 42
 	return p
34 43
 }
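With the switch from a single root to named `SyncedDir`s, the daemon picks a directory by name through the `dir-name` metadata key (see `handle` below). A sketch of the client-side setup; the names mirror the `--local context=. --local dockerfile=.` flags from the README and the paths are placeholders:

```go
package main

import (
	"github.com/moby/buildkit/session/filesync"
)

func main() {
	// Expose two named directories over the session; the daemon requests
	// one of them by name when it starts a copy.
	dirs := []filesync.SyncedDir{
		{Name: "context", Dir: "/path/to/build-context", Excludes: []string{".git"}},
		{Name: "dockerfile", Dir: "/path/to/dockerfile-dir"},
	}
	attachable := filesync.NewFSSyncProvider(dirs)
	_ = attachable // would be passed to (*session.Session).Allow
}
```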
... ...
@@ -58,9 +68,19 @@ func (sp *fsSyncProvider) handle(method string, stream grpc.ServerStream) error
58 58
 
59 59
 	opts, _ := metadata.FromContext(stream.Context()) // if no metadata continue with empty object
60 60
 
61
+	name, ok := opts[keyDirName]
62
+	if !ok || len(name) != 1 {
63
+		return errors.New("no dir name in request")
64
+	}
65
+
66
+	dir, ok := sp.dirs[name[0]]
67
+	if !ok {
68
+		return errors.Errorf("no access allowed to dir %q", name[0])
69
+	}
70
+
61 71
 	var excludes []string
62 72
 	if len(opts[keyOverrideExcludes]) == 0 || opts[keyOverrideExcludes][0] != "true" {
63
-		excludes = sp.excludes
73
+		excludes = dir.Excludes
64 74
 	}
65 75
 	includes := opts[keyIncludePatterns]
66 76
 
... ...
@@ -75,7 +95,7 @@ func (sp *fsSyncProvider) handle(method string, stream grpc.ServerStream) error
75 75
 		doneCh = sp.doneCh
76 76
 		sp.doneCh = nil
77 77
 	}
78
-	err := pr.sendFn(stream, sp.root, includes, excludes, progress)
78
+	err := pr.sendFn(stream, dir.Dir, includes, excludes, progress, dir.Map)
79 79
 	if doneCh != nil {
80 80
 		if err != nil {
81 81
 			doneCh <- err
... ...
@@ -94,8 +114,8 @@ type progressCb func(int, bool)
94 94
 
95 95
 type protocol struct {
96 96
 	name   string
97
-	sendFn func(stream grpc.Stream, srcDir string, includes, excludes []string, progress progressCb) error
98
-	recvFn func(stream grpc.Stream, destDir string, cu CacheUpdater) error
97
+	sendFn func(stream grpc.Stream, srcDir string, includes, excludes []string, progress progressCb, _map func(*fsutil.Stat) bool) error
98
+	recvFn func(stream grpc.Stream, destDir string, cu CacheUpdater, progress progressCb) error
99 99
 }
100 100
 
101 101
 func isProtoSupported(p string) bool {
... ...
@@ -112,25 +132,23 @@ var supportedProtocols = []protocol{
112 112
 		sendFn: sendDiffCopy,
113 113
 		recvFn: recvDiffCopy,
114 114
 	},
115
-	{
116
-		name:   "tarstream",
117
-		sendFn: sendTarStream,
118
-		recvFn: recvTarStream,
119
-	},
120 115
 }
121 116
 
122 117
 // FSSendRequestOpt defines options for FSSend request
123 118
 type FSSendRequestOpt struct {
119
+	Name             string
124 120
 	IncludePatterns  []string
125 121
 	OverrideExcludes bool
126 122
 	DestDir          string
127 123
 	CacheUpdater     CacheUpdater
124
+	ProgressCb       func(int, bool)
128 125
 }
129 126
 
130 127
 // CacheUpdater is an object capable of sending notifications for the cache hash changes
131 128
 type CacheUpdater interface {
132 129
 	MarkSupported(bool)
133 130
 	HandleChange(fsutil.ChangeKind, string, os.FileInfo, error) error
131
+	ContentHasher() fsutil.ContentHasher
134 132
 }
135 133
 
136 134
 // FSSync initializes a transfer of files
... ...
@@ -155,6 +173,8 @@ func FSSync(ctx context.Context, c session.Caller, opt FSSendRequestOpt) error {
155 155
 		opts[keyIncludePatterns] = opt.IncludePatterns
156 156
 	}
157 157
 
158
+	opts[keyDirName] = []string{opt.Name}
159
+
158 160
 	ctx, cancel := context.WithCancel(ctx)
159 161
 	defer cancel()
160 162
 
... ...
@@ -177,7 +197,45 @@ func FSSync(ctx context.Context, c session.Caller, opt FSSendRequestOpt) error {
177 177
 			return err
178 178
 		}
179 179
 		stream = cc
180
+	default:
181
+		panic(fmt.Sprintf("invalid protocol: %q", pr.name))
182
+	}
183
+
184
+	return pr.recvFn(stream, opt.DestDir, opt.CacheUpdater, opt.ProgressCb)
185
+}
186
+
187
+// NewFSSyncTarget allows writing into a directory
188
+func NewFSSyncTarget(outdir string) session.Attachable {
189
+	p := &fsSyncTarget{
190
+		outdir: outdir,
191
+	}
192
+	return p
193
+}
194
+
195
+type fsSyncTarget struct {
196
+	outdir string
197
+}
198
+
199
+func (sp *fsSyncTarget) Register(server *grpc.Server) {
200
+	RegisterFileSendServer(server, sp)
201
+}
202
+
203
+func (sp *fsSyncTarget) DiffCopy(stream FileSend_DiffCopyServer) error {
204
+	return syncTargetDiffCopy(stream, sp.outdir)
205
+}
206
+
207
+func CopyToCaller(ctx context.Context, srcPath string, c session.Caller, progress func(int, bool)) error {
208
+	method := session.MethodURL(_FileSend_serviceDesc.ServiceName, "diffcopy")
209
+	if !c.Supports(method) {
210
+		return errors.Errorf("method %s not supported by the client", method)
211
+	}
212
+
213
+	client := NewFileSendClient(c.Conn())
214
+
215
+	cc, err := client.DiffCopy(ctx)
216
+	if err != nil {
217
+		return err
180 218
 	}
181 219
 
182
-	return pr.recvFn(stream, opt.DestDir, opt.CacheUpdater)
220
+	return sendDiffCopy(cc, srcPath, nil, nil, progress, nil)
183 221
 }
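`NewFSSyncTarget` and `CopyToCaller` are the two halves of the local exporter flow shown in the README: the client exposes an output directory over its session, and the daemon streams the result back into it. A rough sketch of both sides, with placeholder names and paths:

```go
package main

import (
	"context"

	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/filesync"
)

// exportToClient is the daemon-side shape: stream a finished result
// directory back to the caller that attached an FSSyncTarget.
func exportToClient(ctx context.Context, c session.Caller, resultDir string) error {
	return filesync.CopyToCaller(ctx, resultDir, c, func(size int, done bool) {
		// size is the cumulative byte count; done marks completion
	})
}

func main() {
	// Client side: allow the daemon to write the build result into ./output.
	s, err := session.NewSession("export-example", "")
	if err != nil {
		panic(err)
	}
	s.Allow(filesync.NewFSSyncTarget("./output"))
}
```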
... ...
@@ -277,6 +277,102 @@ var _FileSync_serviceDesc = grpc.ServiceDesc{
277 277
 	Metadata: "filesync.proto",
278 278
 }
279 279
 
280
+// Client API for FileSend service
281
+
282
+type FileSendClient interface {
283
+	DiffCopy(ctx context.Context, opts ...grpc.CallOption) (FileSend_DiffCopyClient, error)
284
+}
285
+
286
+type fileSendClient struct {
287
+	cc *grpc.ClientConn
288
+}
289
+
290
+func NewFileSendClient(cc *grpc.ClientConn) FileSendClient {
291
+	return &fileSendClient{cc}
292
+}
293
+
294
+func (c *fileSendClient) DiffCopy(ctx context.Context, opts ...grpc.CallOption) (FileSend_DiffCopyClient, error) {
295
+	stream, err := grpc.NewClientStream(ctx, &_FileSend_serviceDesc.Streams[0], c.cc, "/moby.filesync.v1.FileSend/DiffCopy", opts...)
296
+	if err != nil {
297
+		return nil, err
298
+	}
299
+	x := &fileSendDiffCopyClient{stream}
300
+	return x, nil
301
+}
302
+
303
+type FileSend_DiffCopyClient interface {
304
+	Send(*BytesMessage) error
305
+	Recv() (*BytesMessage, error)
306
+	grpc.ClientStream
307
+}
308
+
309
+type fileSendDiffCopyClient struct {
310
+	grpc.ClientStream
311
+}
312
+
313
+func (x *fileSendDiffCopyClient) Send(m *BytesMessage) error {
314
+	return x.ClientStream.SendMsg(m)
315
+}
316
+
317
+func (x *fileSendDiffCopyClient) Recv() (*BytesMessage, error) {
318
+	m := new(BytesMessage)
319
+	if err := x.ClientStream.RecvMsg(m); err != nil {
320
+		return nil, err
321
+	}
322
+	return m, nil
323
+}
324
+
325
+// Server API for FileSend service
326
+
327
+type FileSendServer interface {
328
+	DiffCopy(FileSend_DiffCopyServer) error
329
+}
330
+
331
+func RegisterFileSendServer(s *grpc.Server, srv FileSendServer) {
332
+	s.RegisterService(&_FileSend_serviceDesc, srv)
333
+}
334
+
335
+func _FileSend_DiffCopy_Handler(srv interface{}, stream grpc.ServerStream) error {
336
+	return srv.(FileSendServer).DiffCopy(&fileSendDiffCopyServer{stream})
337
+}
338
+
339
+type FileSend_DiffCopyServer interface {
340
+	Send(*BytesMessage) error
341
+	Recv() (*BytesMessage, error)
342
+	grpc.ServerStream
343
+}
344
+
345
+type fileSendDiffCopyServer struct {
346
+	grpc.ServerStream
347
+}
348
+
349
+func (x *fileSendDiffCopyServer) Send(m *BytesMessage) error {
350
+	return x.ServerStream.SendMsg(m)
351
+}
352
+
353
+func (x *fileSendDiffCopyServer) Recv() (*BytesMessage, error) {
354
+	m := new(BytesMessage)
355
+	if err := x.ServerStream.RecvMsg(m); err != nil {
356
+		return nil, err
357
+	}
358
+	return m, nil
359
+}
360
+
361
+var _FileSend_serviceDesc = grpc.ServiceDesc{
362
+	ServiceName: "moby.filesync.v1.FileSend",
363
+	HandlerType: (*FileSendServer)(nil),
364
+	Methods:     []grpc.MethodDesc{},
365
+	Streams: []grpc.StreamDesc{
366
+		{
367
+			StreamName:    "DiffCopy",
368
+			Handler:       _FileSend_DiffCopy_Handler,
369
+			ServerStreams: true,
370
+			ClientStreams: true,
371
+		},
372
+	},
373
+	Metadata: "filesync.proto",
374
+}
375
+
280 376
 func (m *BytesMessage) Marshal() (dAtA []byte, err error) {
281 377
 	size := m.Size()
282 378
 	dAtA = make([]byte, size)
... ...
@@ -558,7 +654,7 @@ var (
558 558
 func init() { proto.RegisterFile("filesync.proto", fileDescriptorFilesync) }
559 559
 
560 560
 var fileDescriptorFilesync = []byte{
561
-	// 198 bytes of a gzipped FileDescriptorProto
561
+	// 208 bytes of a gzipped FileDescriptorProto
562 562
 	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0x4b, 0xcb, 0xcc, 0x49,
563 563
 	0x2d, 0xae, 0xcc, 0x4b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x12, 0xc8, 0xcd, 0x4f, 0xaa,
564 564
 	0xd4, 0x83, 0x0b, 0x96, 0x19, 0x2a, 0x29, 0x71, 0xf1, 0x38, 0x55, 0x96, 0xa4, 0x16, 0xfb, 0xa6,
... ...
@@ -566,10 +662,10 @@ var fileDescriptorFilesync = []byte{
566 566
 	0x30, 0x6a, 0xf0, 0x04, 0x81, 0xd9, 0x46, 0xab, 0x19, 0xb9, 0x38, 0xdc, 0x32, 0x73, 0x52, 0x83,
567 567
 	0x2b, 0xf3, 0x92, 0x85, 0xfc, 0xb8, 0x38, 0x5c, 0x32, 0xd3, 0xd2, 0x9c, 0xf3, 0x0b, 0x2a, 0x85,
568 568
 	0xe4, 0xf4, 0xd0, 0xcd, 0xd3, 0x43, 0x36, 0x4c, 0x8a, 0x80, 0xbc, 0x06, 0xa3, 0x01, 0xa3, 0x90,
569
-	0x3f, 0x17, 0x67, 0x48, 0x62, 0x51, 0x70, 0x49, 0x51, 0x6a, 0x62, 0x2e, 0x35, 0x0c, 0x74, 0x32,
570
-	0xbb, 0xf0, 0x50, 0x8e, 0xe1, 0xc6, 0x43, 0x39, 0x86, 0x0f, 0x0f, 0xe5, 0x18, 0x1b, 0x1e, 0xc9,
571
-	0x31, 0xae, 0x78, 0x24, 0xc7, 0x78, 0xe2, 0x91, 0x1c, 0xe3, 0x85, 0x47, 0x72, 0x8c, 0x0f, 0x1e,
572
-	0xc9, 0x31, 0xbe, 0x78, 0x24, 0xc7, 0xf0, 0xe1, 0x91, 0x1c, 0xe3, 0x84, 0xc7, 0x72, 0x0c, 0x51,
573
-	0x1c, 0x30, 0xb3, 0x92, 0xd8, 0xc0, 0x41, 0x64, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0x5f, 0x0c,
574
-	0x8d, 0xc5, 0x34, 0x01, 0x00, 0x00,
569
+	0x3f, 0x17, 0x67, 0x48, 0x62, 0x51, 0x70, 0x49, 0x51, 0x6a, 0x62, 0x2e, 0x35, 0x0c, 0x34, 0x8a,
570
+	0x82, 0x3a, 0x36, 0x35, 0x2f, 0x85, 0xda, 0x8e, 0x75, 0x32, 0xbb, 0xf0, 0x50, 0x8e, 0xe1, 0xc6,
571
+	0x43, 0x39, 0x86, 0x0f, 0x0f, 0xe5, 0x18, 0x1b, 0x1e, 0xc9, 0x31, 0xae, 0x78, 0x24, 0xc7, 0x78,
572
+	0xe2, 0x91, 0x1c, 0xe3, 0x85, 0x47, 0x72, 0x8c, 0x0f, 0x1e, 0xc9, 0x31, 0xbe, 0x78, 0x24, 0xc7,
573
+	0xf0, 0xe1, 0x91, 0x1c, 0xe3, 0x84, 0xc7, 0x72, 0x0c, 0x51, 0x1c, 0x30, 0xb3, 0x92, 0xd8, 0xc0,
574
+	0xc1, 0x6f, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0x72, 0x81, 0x1a, 0x91, 0x90, 0x01, 0x00, 0x00,
575 575
 }
... ...
@@ -9,6 +9,11 @@ service FileSync{
9 9
   rpc TarStream(stream BytesMessage) returns (stream BytesMessage);
10 10
 }
11 11
 
12
+service FileSend{
13
+  rpc DiffCopy(stream BytesMessage) returns (stream BytesMessage);
14
+}
15
+
16
+
12 17
 // BytesMessage contains a chunk of byte data
13 18
 message BytesMessage{
14 19
 	bytes data = 1;
15 20
deleted file mode 100644
... ...
@@ -1,83 +0,0 @@
1
-package filesync
2
-
3
-import (
4
-	"io"
5
-
6
-	"github.com/docker/docker/pkg/archive"
7
-	"github.com/docker/docker/pkg/chrootarchive"
8
-	"github.com/pkg/errors"
9
-	"github.com/sirupsen/logrus"
10
-	"google.golang.org/grpc"
11
-)
12
-
13
-func sendTarStream(stream grpc.Stream, dir string, includes, excludes []string, progress progressCb) error {
14
-	a, err := archive.TarWithOptions(dir, &archive.TarOptions{
15
-		ExcludePatterns: excludes,
16
-	})
17
-	if err != nil {
18
-		return err
19
-	}
20
-
21
-	size := 0
22
-	buf := make([]byte, 1<<15)
23
-	t := new(BytesMessage)
24
-	for {
25
-		n, err := a.Read(buf)
26
-		if err != nil {
27
-			if err == io.EOF {
28
-				break
29
-			}
30
-			return err
31
-		}
32
-		t.Data = buf[:n]
33
-
34
-		if err := stream.SendMsg(t); err != nil {
35
-			return err
36
-		}
37
-		size += n
38
-		if progress != nil {
39
-			progress(size, false)
40
-		}
41
-	}
42
-	if progress != nil {
43
-		progress(size, true)
44
-	}
45
-	return nil
46
-}
47
-
48
-func recvTarStream(ds grpc.Stream, dest string, cs CacheUpdater) error {
49
-
50
-	pr, pw := io.Pipe()
51
-
52
-	go func() {
53
-		var (
54
-			err error
55
-			t   = new(BytesMessage)
56
-		)
57
-		for {
58
-			if err = ds.RecvMsg(t); err != nil {
59
-				if err == io.EOF {
60
-					err = nil
61
-				}
62
-				break
63
-			}
64
-			_, err = pw.Write(t.Data)
65
-			if err != nil {
66
-				break
67
-			}
68
-		}
69
-		if err = pw.CloseWithError(err); err != nil {
70
-			logrus.Errorf("failed to close tar transfer pipe")
71
-		}
72
-	}()
73
-
74
-	decompressedStream, err := archive.DecompressStream(pr)
75
-	if err != nil {
76
-		return errors.Wrap(err, "failed to decompress stream")
77
-	}
78
-
79
-	if err := chrootarchive.Untar(decompressedStream, dest, nil); err != nil {
80
-		return errors.Wrap(err, "failed to untar context")
81
-	}
82
-	return nil
83
-}
... ...
@@ -49,14 +49,14 @@ func (sm *Manager) HandleHTTPRequest(ctx context.Context, w http.ResponseWriter,
49 49
 		return errors.New("handler does not support hijack")
50 50
 	}
51 51
 
52
-	uuid := r.Header.Get(headerSessionUUID)
52
+	id := r.Header.Get(headerSessionID)
53 53
 
54 54
 	proto := r.Header.Get("Upgrade")
55 55
 
56 56
 	sm.mu.Lock()
57
-	if _, ok := sm.sessions[uuid]; ok {
57
+	if _, ok := sm.sessions[id]; ok {
58 58
 		sm.mu.Unlock()
59
-		return errors.Errorf("session %s already exists", uuid)
59
+		return errors.Errorf("session %s already exists", id)
60 60
 	}
61 61
 
62 62
 	if proto == "" {
... ...
@@ -102,8 +102,10 @@ func (sm *Manager) handleConn(ctx context.Context, conn net.Conn, opts map[strin
102 102
 	ctx, cancel := context.WithCancel(ctx)
103 103
 	defer cancel()
104 104
 
105
+	opts = canonicalHeaders(opts)
106
+
105 107
 	h := http.Header(opts)
106
-	uuid := h.Get(headerSessionUUID)
108
+	id := h.Get(headerSessionID)
107 109
 	name := h.Get(headerSessionName)
108 110
 	sharedKey := h.Get(headerSessionSharedKey)
109 111
 
... ...
@@ -115,7 +117,7 @@ func (sm *Manager) handleConn(ctx context.Context, conn net.Conn, opts map[strin
115 115
 
116 116
 	c := &client{
117 117
 		Session: Session{
118
-			uuid:      uuid,
118
+			id:        id,
119 119
 			name:      name,
120 120
 			sharedKey: sharedKey,
121 121
 			ctx:       ctx,
... ...
@@ -129,13 +131,13 @@ func (sm *Manager) handleConn(ctx context.Context, conn net.Conn, opts map[strin
129 129
 	for _, m := range opts[headerSessionMethod] {
130 130
 		c.supported[strings.ToLower(m)] = struct{}{}
131 131
 	}
132
-	sm.sessions[uuid] = c
132
+	sm.sessions[id] = c
133 133
 	sm.updateCondition.Broadcast()
134 134
 	sm.mu.Unlock()
135 135
 
136 136
 	defer func() {
137 137
 		sm.mu.Lock()
138
-		delete(sm.sessions, uuid)
138
+		delete(sm.sessions, id)
139 139
 		sm.mu.Unlock()
140 140
 	}()
141 141
 
... ...
@@ -146,8 +148,8 @@ func (sm *Manager) handleConn(ctx context.Context, conn net.Conn, opts map[strin
146 146
 	return nil
147 147
 }
148 148
 
149
-// Get returns a session by UUID
150
-func (sm *Manager) Get(ctx context.Context, uuid string) (Caller, error) {
149
+// Get returns a session by ID
150
+func (sm *Manager) Get(ctx context.Context, id string) (Caller, error) {
151 151
 	ctx, cancel := context.WithCancel(ctx)
152 152
 	defer cancel()
153 153
 
... ...
@@ -165,11 +167,11 @@ func (sm *Manager) Get(ctx context.Context, uuid string) (Caller, error) {
165 165
 		select {
166 166
 		case <-ctx.Done():
167 167
 			sm.mu.Unlock()
168
-			return nil, errors.Wrapf(ctx.Err(), "no active session for %s", uuid)
168
+			return nil, errors.Wrapf(ctx.Err(), "no active session for %s", id)
169 169
 		default:
170 170
 		}
171 171
 		var ok bool
172
-		c, ok = sm.sessions[uuid]
172
+		c, ok = sm.sessions[id]
173 173
 		if !ok || c.closed() {
174 174
 			sm.updateCondition.Wait()
175 175
 			continue
... ...
@@ -200,3 +202,11 @@ func (c *client) Supports(url string) bool {
200 200
 func (c *client) Conn() *grpc.ClientConn {
201 201
 	return c.cc
202 202
 }
203
+
204
+func canonicalHeaders(in map[string][]string) map[string][]string {
205
+	out := map[string][]string{}
206
+	for k := range in {
207
+		out[http.CanonicalHeaderKey(k)] = in[k]
208
+	}
209
+	return out
210
+}
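The `canonicalHeaders` step is needed because the `opts` map comes from gRPC metadata, whose keys are lower-cased, while `http.Header.Get` looks keys up in canonical MIME form. A quick illustration using only the standard library:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Without canonicalization, a lookup of headerSessionID would miss the
	// lower-cased key that arrives over gRPC metadata.
	fmt.Println(http.CanonicalHeaderKey("x-docker-expose-session-uuid"))
	// Output: X-Docker-Expose-Session-Uuid
}
```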
... ...
@@ -12,7 +12,7 @@ import (
12 12
 )
13 13
 
14 14
 const (
15
-	headerSessionUUID      = "X-Docker-Expose-Session-Uuid"
15
+	headerSessionID        = "X-Docker-Expose-Session-Uuid"
16 16
 	headerSessionName      = "X-Docker-Expose-Session-Name"
17 17
 	headerSessionSharedKey = "X-Docker-Expose-Session-Sharedkey"
18 18
 	headerSessionMethod    = "X-Docker-Expose-Session-Grpc-Method"
... ...
@@ -28,7 +28,7 @@ type Attachable interface {
28 28
 
29 29
 // Session is a long running connection between client and a daemon
30 30
 type Session struct {
31
-	uuid       string
31
+	id         string
32 32
 	name       string
33 33
 	sharedKey  string
34 34
 	ctx        context.Context
... ...
@@ -39,9 +39,9 @@ type Session struct {
39 39
 
40 40
 // NewSession returns a new long running session
41 41
 func NewSession(name, sharedKey string) (*Session, error) {
42
-	uuid := stringid.GenerateRandomID()
42
+	id := stringid.GenerateRandomID()
43 43
 	s := &Session{
44
-		uuid:       uuid,
44
+		id:         id,
45 45
 		name:       name,
46 46
 		sharedKey:  sharedKey,
47 47
 		grpcServer: grpc.NewServer(),
... ...
@@ -57,9 +57,9 @@ func (s *Session) Allow(a Attachable) {
57 57
 	a.Register(s.grpcServer)
58 58
 }
59 59
 
60
-// UUID returns unique identifier for the session
61
-func (s *Session) UUID() string {
62
-	return s.uuid
60
+// ID returns unique identifier for the session
61
+func (s *Session) ID() string {
62
+	return s.id
63 63
 }
64 64
 
65 65
 // Run activates the session
... ...
@@ -72,7 +72,7 @@ func (s *Session) Run(ctx context.Context, dialer Dialer) error {
72 72
 	defer close(s.done)
73 73
 
74 74
 	meta := make(map[string][]string)
75
-	meta[headerSessionUUID] = []string{s.uuid}
75
+	meta[headerSessionID] = []string{s.id}
76 76
 	meta[headerSessionName] = []string{s.name}
77 77
 	meta[headerSessionSharedKey] = []string{s.sharedKey}
78 78
 
... ...
@@ -92,6 +92,7 @@ func (s *Session) Run(ctx context.Context, dialer Dialer) error {
92 92
 // Close closes the session
93 93
 func (s *Session) Close() error {
94 94
 	if s.cancelCtx != nil && s.done != nil {
95
+		s.grpcServer.Stop()
95 96
 		s.cancelCtx()
96 97
 		<-s.done
97 98
 	}
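For context, the client-side lifecycle these types support looks roughly like the sketch below: create a session, `Allow` the attachables the daemon may call back into, `Run` it over a transport dialer, and `Close` it when done (which now also stops the embedded gRPC server). The dialer itself is transport specific (e.g. a hijacked HTTP connection) and not shown:

```go
package main

import (
	"context"

	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/filesync"
)

func runSession(ctx context.Context, dialer session.Dialer) error {
	s, err := session.NewSession("my-build", "my-shared-key")
	if err != nil {
		return err
	}
	defer s.Close() // cancels the session and stops its gRPC server

	// Attach whatever the daemon is allowed to call back into.
	s.Allow(filesync.NewFSSyncProvider([]filesync.SyncedDir{
		{Name: "context", Dir: "."},
	}))

	return s.Run(ctx, dialer)
}

func main() {}
```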
... ...
@@ -6,26 +6,26 @@ github.com/davecgh/go-spew v1.1.0
6 6
 github.com/pmezard/go-difflib v1.0.0
7 7
 golang.org/x/sys 739734461d1c916b6c72a63d7efda2b27edb369f
8 8
 
9
-github.com/containerd/containerd 3707703a694187c7d08e2f333da6ddd58bcb729d
10
-golang.org/x/sync 450f422ab23cf9881c94e2db30cac0eb1b7cf80c
11
-github.com/Sirupsen/logrus v0.11.0
9
+github.com/containerd/containerd d1e11f17ec7b325f89608dd46c128300b8727d50
10
+golang.org/x/sync f52d1811a62927559de87708c8913c1650ce4f26
11
+github.com/sirupsen/logrus v1.0.0
12 12
 google.golang.org/grpc v1.3.0
13 13
 github.com/opencontainers/go-digest 21dfd564fd89c944783d00d069f33e3e7123c448
14 14
 golang.org/x/net 1f9224279e98554b6a6432d4dd998a739f8b2b7c
15 15
 github.com/gogo/protobuf d2e1ade2d719b78fe5b061b4c18a9f7111b5bdc8
16 16
 github.com/golang/protobuf 5a0f697c9ed9d68fef0116532c6e05cfeae00e55
17 17
 github.com/containerd/continuity 86cec1535a968310e7532819f699ff2830ed7463
18
-github.com/opencontainers/image-spec v1.0.0-rc6
19
-github.com/opencontainers/runc 429a5387123625040bacfbb60d96b1cbd02293ab
18
+github.com/opencontainers/image-spec v1.0.0
19
+github.com/opencontainers/runc e775f0fba3ea329b8b766451c892c41a3d49594d
20 20
 github.com/Microsoft/go-winio v0.4.1
21 21
 github.com/containerd/fifo 69b99525e472735860a5269b75af1970142b3062
22
-github.com/opencontainers/runtime-spec 198f23f827eea397d4331d7eb048d9d4c7ff7bee
22
+github.com/opencontainers/runtime-spec 96de01bbb42c7af89bff100e10a9f0fb62e75bfb
23 23
 github.com/containerd/go-runc 2774a2ea124a5c2d0aba13b5c2dd8a5a9a48775d
24 24
 github.com/containerd/console 7fed77e673ca4abcd0cbd6d4d0e0e22137cbd778
25
-github.com/Azure/go-ansiterm fa152c58bc15761d0200cb75fe958b89a9d4888e
25
+github.com/Azure/go-ansiterm 19f72df4d05d31cbe1c56bfc8045c96babff6c7e
26 26
 google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
27 27
 golang.org/x/text 19e51611da83d6be54ddafce4a4af510cb3e9ea4
28
-github.com/docker/go-events aa2e3b613fbbfdddbe055a7b9e3ce271cfd83eca
28
+github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
29 29
 
30 30
 github.com/urfave/cli d70f47eeca3afd795160003bc6e28b001d60c67c
31 31
 github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
... ...
@@ -33,8 +33,14 @@ github.com/google/shlex 6f45313302b9c56850fc17f99e40caebce98c716
33 33
 golang.org/x/time 8be79e1e0910c292df4e79c241bb7e8f7e725959
34 34
 
35 35
 github.com/BurntSushi/locker 392720b78f44e9d0249fcac6c43b111b47a370b8
36
-github.com/docker/docker 05c7c311390911daebcf5d9519dee813fc02a887
36
+github.com/docker/docker 6f723db8c6f0c7f0b252674a9673a25b5978db04 https://github.com/tonistiigi/docker.git
37 37
 github.com/pkg/profile 5b67d428864e92711fcbd2f8629456121a56d91f
38 38
 
39
-github.com/tonistiigi/fsutil 0ac4c11b053b9c5c7c47558f81f96c7100ce50fb
39
+github.com/tonistiigi/fsutil 1dedf6e90084bd88c4c518a15e68a37ed1370203
40 40
 github.com/stevvooe/continuity 86cec1535a968310e7532819f699ff2830ed7463
41
+github.com/dmcgowan/go-tar 2e2c51242e8993c50445dab7c03c8e7febddd0cf
42
+github.com/hashicorp/go-immutable-radix 826af9ccf0feeee615d546d69b11f8e98da8c8f1 git://github.com/tonistiigi/go-immutable-radix.git
43
+github.com/hashicorp/golang-lru a0d98a5f288019575c6d1f4bb1573fef2d1fcdc4
44
+github.com/mitchellh/hashstructure 2bca23e0e452137f789efbc8610126fd8b94f73b
45
+github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
46
+github.com/docker/distribution 30578ca32960a4d368bf6db67b0a33c2a1f3dc6f
... ...
@@ -1,6 +1,7 @@
1 1
 package fsutil
2 2
 
3 3
 import (
4
+	"hash"
4 5
 	"os"
5 6
 
6 7
 	"golang.org/x/net/context"
... ...
@@ -14,6 +15,8 @@ func Changes(ctx context.Context, a, b walkerFn, changeFn ChangeFunc) error {
14 14
 
15 15
 type HandleChangeFn func(ChangeKind, string, os.FileInfo, error) error
16 16
 
17
+type ContentHasher func(*Stat) (hash.Hash, error)
18
+
17 19
 func GetWalkerFn(root string) walkerFn {
18 20
 	return func(ctx context.Context, pathC chan<- *currentPath) error {
19 21
 		return Walk(ctx, root, nil, func(path string, f os.FileInfo, err error) error {
... ...
@@ -35,3 +38,7 @@ func GetWalkerFn(root string) walkerFn {
35 35
 		})
36 36
 	}
37 37
 }
38
+
39
+func emptyWalker(ctx context.Context, pathC chan<- *currentPath) error {
40
+	return nil
41
+}
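`ContentHasher` (introduced above) lets callers supply their own per-file hash constructor. The simplest possible implementation just hashes file contents with SHA-256 and ignores the metadata carried in `*Stat`; a sketch with an illustrative name:

```go
package main

import (
	"crypto/sha256"
	"hash"

	"github.com/tonistiigi/fsutil"
)

// plainSHA256 ignores the Stat metadata entirely; real cache updaters may
// fold mode, uid/gid, xattrs etc. into the hash as well.
func plainSHA256(*fsutil.Stat) (hash.Hash, error) {
	return sha256.New(), nil
}

func main() {
	var _ fsutil.ContentHasher = plainSHA256 // shape check only
}
```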
... ...
@@ -1,11 +1,6 @@
1
-// +build linux windows
2
-
3 1
 package fsutil
4 2
 
5 3
 import (
6
-	"archive/tar"
7
-	"crypto/sha256"
8
-	"encoding/hex"
9 4
 	"hash"
10 5
 	"io"
11 6
 	"os"
... ...
@@ -14,8 +9,7 @@ import (
14 14
 	"sync"
15 15
 	"time"
16 16
 
17
-	"github.com/docker/docker/pkg/archive"
18
-	"github.com/docker/docker/pkg/tarsum"
17
+	digest "github.com/opencontainers/go-digest"
19 18
 	"github.com/pkg/errors"
20 19
 	"golang.org/x/net/context"
21 20
 	"golang.org/x/sync/errgroup"
... ...
@@ -24,11 +18,15 @@ import (
24 24
 type WriteToFunc func(context.Context, string, io.WriteCloser) error
25 25
 
26 26
 type DiskWriterOpt struct {
27
-	AsyncDataCb WriteToFunc
28
-	SyncDataCb  WriteToFunc
29
-	NotifyCb    func(ChangeKind, string, os.FileInfo, error) error
27
+	AsyncDataCb   WriteToFunc
28
+	SyncDataCb    WriteToFunc
29
+	NotifyCb      func(ChangeKind, string, os.FileInfo, error) error
30
+	ContentHasher ContentHasher
31
+	Filter        FilterFunc
30 32
 }
31 33
 
34
+type FilterFunc func(*Stat) bool
35
+
32 36
 type DiskWriter struct {
33 37
 	opt  DiskWriterOpt
34 38
 	dest string
... ...
@@ -37,6 +35,7 @@ type DiskWriter struct {
37 37
 	ctx    context.Context
38 38
 	cancel func()
39 39
 	eg     *errgroup.Group
40
+	filter FilterFunc
40 41
 }
41 42
 
42 43
 func NewDiskWriter(ctx context.Context, dest string, opt DiskWriterOpt) (*DiskWriter, error) {
... ...
@@ -102,6 +101,12 @@ func (dw *DiskWriter) HandleChange(kind ChangeKind, p string, fi os.FileInfo, er
102 102
 		return errors.Errorf("%s invalid change without stat information", p)
103 103
 	}
104 104
 
105
+	if dw.filter != nil {
106
+		if ok := dw.filter(stat); !ok {
107
+			return nil
108
+		}
109
+	}
110
+
105 111
 	rename := true
106 112
 	oldFi, err := os.Lstat(destPath)
107 113
 	if err != nil {
... ...
@@ -202,7 +207,7 @@ func (dw *DiskWriter) processChange(kind ChangeKind, p string, fi os.FileInfo, w
202 202
 	var hw *hashedWriter
203 203
 	if dw.opt.NotifyCb != nil {
204 204
 		var err error
205
-		if hw, err = newHashWriter(p, fi, w); err != nil {
205
+		if hw, err = newHashWriter(dw.opt.ContentHasher, fi, w); err != nil {
206 206
 			return err
207 207
 		}
208 208
 		w = hw
... ...
@@ -229,13 +234,18 @@ func (dw *DiskWriter) processChange(kind ChangeKind, p string, fi os.FileInfo, w
229 229
 type hashedWriter struct {
230 230
 	os.FileInfo
231 231
 	io.Writer
232
-	h   hash.Hash
233
-	w   io.WriteCloser
234
-	sum string
232
+	h    hash.Hash
233
+	w    io.WriteCloser
234
+	dgst digest.Digest
235 235
 }
236 236
 
237
-func newHashWriter(p string, fi os.FileInfo, w io.WriteCloser) (*hashedWriter, error) {
238
-	h, err := NewTarsumHash(p, fi)
237
+func newHashWriter(ch ContentHasher, fi os.FileInfo, w io.WriteCloser) (*hashedWriter, error) {
238
+	stat, ok := fi.Sys().(*Stat)
239
+	if !ok {
240
+		return nil, errors.Errorf("invalid change without stat information")
241
+	}
242
+
243
+	h, err := ch(stat)
239 244
 	if err != nil {
240 245
 		return nil, err
241 246
 	}
... ...
@@ -249,15 +259,15 @@ func newHashWriter(p string, fi os.FileInfo, w io.WriteCloser) (*hashedWriter, e
249 249
 }
250 250
 
251 251
 func (hw *hashedWriter) Close() error {
252
-	hw.sum = string(hex.EncodeToString(hw.h.Sum(nil)))
252
+	hw.dgst = digest.NewDigest(digest.SHA256, hw.h)
253 253
 	if hw.w != nil {
254 254
 		return hw.w.Close()
255 255
 	}
256 256
 	return nil
257 257
 }
258 258
 
259
-func (hw *hashedWriter) Hash() string {
260
-	return hw.sum
259
+func (hw *hashedWriter) Digest() digest.Digest {
260
+	return hw.dgst
261 261
 }
262 262
 
263 263
 type lazyFileWriter struct {
... ...
@@ -310,44 +320,3 @@ func nextSuffix() string {
310 310
 	randmu.Unlock()
311 311
 	return strconv.Itoa(int(1e9 + r%1e9))[1:]
312 312
 }
313
-
314
-func NewTarsumHash(p string, fi os.FileInfo) (hash.Hash, error) {
315
-	stat, ok := fi.Sys().(*Stat)
316
-	link := ""
317
-	if ok {
318
-		link = stat.Linkname
319
-	}
320
-	if fi.IsDir() {
321
-		p += string(os.PathSeparator)
322
-	}
323
-	h, err := archive.FileInfoHeader(p, fi, link)
324
-	if err != nil {
325
-		return nil, err
326
-	}
327
-	h.Name = p
328
-	if ok {
329
-		h.Uid = int(stat.Uid)
330
-		h.Gid = int(stat.Gid)
331
-		h.Linkname = stat.Linkname
332
-		if stat.Xattrs != nil {
333
-			h.Xattrs = make(map[string]string)
334
-			for k, v := range stat.Xattrs {
335
-				h.Xattrs[k] = string(v)
336
-			}
337
-		}
338
-	}
339
-	tsh := &tarsumHash{h: h, Hash: sha256.New()}
340
-	tsh.Reset()
341
-	return tsh, nil
342
-}
343
-
344
-// Reset resets the Hash to its initial state.
345
-func (tsh *tarsumHash) Reset() {
346
-	tsh.Hash.Reset()
347
-	tarsum.WriteV1Header(tsh.h, tsh.Hash)
348
-}
349
-
350
-type tarsumHash struct {
351
-	hash.Hash
352
-	h *tar.Header
353
-}
354 313
new file mode 100644
... ...
@@ -0,0 +1,7 @@
0
+// +build darwin
1
+
2
+package fsutil
3
+
4
+func chtimes(path string, un int64) error {
5
+	return nil
6
+}
... ...
@@ -3,36 +3,10 @@
3 3
 package fsutil
4 4
 
5 5
 import (
6
-	"os"
7
-	"syscall"
8
-
9 6
 	"github.com/pkg/errors"
10
-	"github.com/stevvooe/continuity/sysx"
11 7
 	"golang.org/x/sys/unix"
12 8
 )
13 9
 
14
-func rewriteMetadata(p string, stat *Stat) error {
15
-	for key, value := range stat.Xattrs {
16
-		sysx.Setxattr(p, key, value, 0)
17
-	}
18
-
19
-	if err := os.Lchown(p, int(stat.Uid), int(stat.Gid)); err != nil {
20
-		return errors.Wrapf(err, "failed to lchown %s", p)
21
-	}
22
-
23
-	if os.FileMode(stat.Mode)&os.ModeSymlink == 0 {
24
-		if err := os.Chmod(p, os.FileMode(stat.Mode)); err != nil {
25
-			return errors.Wrapf(err, "failed to chown %s", p)
26
-		}
27
-	}
28
-
29
-	if err := chtimes(p, stat.ModTime); err != nil {
30
-		return errors.Wrapf(err, "failed to chtimes %s", p)
31
-	}
32
-
33
-	return nil
34
-}
35
-
36 10
 func chtimes(path string, un int64) error {
37 11
 	var utimes [2]unix.Timespec
38 12
 	utimes[0] = unix.NsecToTimespec(un)
... ...
@@ -44,21 +18,3 @@ func chtimes(path string, un int64) error {
44 44
 
45 45
 	return nil
46 46
 }
47
-
48
-// handleTarTypeBlockCharFifo is an OS-specific helper function used by
49
-// createTarFile to handle the following types of header: Block; Char; Fifo
50
-func handleTarTypeBlockCharFifo(path string, stat *Stat) error {
51
-	mode := uint32(stat.Mode & 07777)
52
-	if os.FileMode(stat.Mode)&os.ModeCharDevice != 0 {
53
-		mode |= syscall.S_IFCHR
54
-	} else if os.FileMode(stat.Mode)&os.ModeNamedPipe != 0 {
55
-		mode |= syscall.S_IFIFO
56
-	} else {
57
-		mode |= syscall.S_IFBLK
58
-	}
59
-
60
-	if err := syscall.Mknod(path, mode, int(mkdev(stat.Devmajor, stat.Devminor))); err != nil {
61
-		return err
62
-	}
63
-	return nil
64
-}
65 47
new file mode 100644
... ...
@@ -0,0 +1,51 @@
0
+// +build !windows
1
+
2
+package fsutil
3
+
4
+import (
5
+	"os"
6
+	"syscall"
7
+
8
+	"github.com/pkg/errors"
9
+	"github.com/stevvooe/continuity/sysx"
10
+)
11
+
12
+func rewriteMetadata(p string, stat *Stat) error {
13
+	for key, value := range stat.Xattrs {
14
+		sysx.Setxattr(p, key, value, 0)
15
+	}
16
+
17
+	if err := os.Lchown(p, int(stat.Uid), int(stat.Gid)); err != nil {
18
+		return errors.Wrapf(err, "failed to lchown %s", p)
19
+	}
20
+
21
+	if os.FileMode(stat.Mode)&os.ModeSymlink == 0 {
22
+		if err := os.Chmod(p, os.FileMode(stat.Mode)); err != nil {
23
+			return errors.Wrapf(err, "failed to chown %s", p)
24
+		}
25
+	}
26
+
27
+	if err := chtimes(p, stat.ModTime); err != nil {
28
+		return errors.Wrapf(err, "failed to chtimes %s", p)
29
+	}
30
+
31
+	return nil
32
+}
33
+
34
+// handleTarTypeBlockCharFifo is an OS-specific helper function used by
35
+// createTarFile to handle the following types of header: Block; Char; Fifo
36
+func handleTarTypeBlockCharFifo(path string, stat *Stat) error {
37
+	mode := uint32(stat.Mode & 07777)
38
+	if os.FileMode(stat.Mode)&os.ModeCharDevice != 0 {
39
+		mode |= syscall.S_IFCHR
40
+	} else if os.FileMode(stat.Mode)&os.ModeNamedPipe != 0 {
41
+		mode |= syscall.S_IFIFO
42
+	} else {
43
+		mode |= syscall.S_IFBLK
44
+	}
45
+
46
+	if err := syscall.Mknod(path, mode, int(mkdev(stat.Devmajor, stat.Devminor))); err != nil {
47
+		return err
48
+	}
49
+	return nil
50
+}
... ...
@@ -1,5 +1,3 @@
1
-// +build linux windows
2
-
3 1
 package fsutil
4 2
 
5 3
 import (
... ...
@@ -12,29 +10,45 @@ import (
12 12
 	"golang.org/x/sync/errgroup"
13 13
 )
14 14
 
15
-func Receive(ctx context.Context, conn Stream, dest string, notifyHashed ChangeFunc) error {
15
+type ReceiveOpt struct {
16
+	NotifyHashed  ChangeFunc
17
+	ContentHasher ContentHasher
18
+	ProgressCb    func(int, bool)
19
+	Merge         bool
20
+	Filter        FilterFunc
21
+}
22
+
23
+func Receive(ctx context.Context, conn Stream, dest string, opt ReceiveOpt) error {
16 24
 	ctx, cancel := context.WithCancel(context.Background())
17 25
 	defer cancel()
18 26
 
19 27
 	r := &receiver{
20
-		conn:         &syncStream{Stream: conn},
21
-		dest:         dest,
22
-		files:        make(map[string]uint32),
23
-		pipes:        make(map[uint32]io.WriteCloser),
24
-		notifyHashed: notifyHashed,
28
+		conn:          &syncStream{Stream: conn},
29
+		dest:          dest,
30
+		files:         make(map[string]uint32),
31
+		pipes:         make(map[uint32]io.WriteCloser),
32
+		notifyHashed:  opt.NotifyHashed,
33
+		contentHasher: opt.ContentHasher,
34
+		progressCb:    opt.ProgressCb,
35
+		merge:         opt.Merge,
36
+		filter:        opt.Filter,
25 37
 	}
26 38
 	return r.run(ctx)
27 39
 }
28 40
 
29 41
 type receiver struct {
30
-	dest    string
31
-	conn    Stream
32
-	files   map[string]uint32
33
-	pipes   map[uint32]io.WriteCloser
34
-	mu      sync.RWMutex
35
-	muPipes sync.RWMutex
42
+	dest       string
43
+	conn       Stream
44
+	files      map[string]uint32
45
+	pipes      map[uint32]io.WriteCloser
46
+	mu         sync.RWMutex
47
+	muPipes    sync.RWMutex
48
+	progressCb func(int, bool)
49
+	merge      bool
50
+	filter     FilterFunc
36 51
 
37 52
 	notifyHashed   ChangeFunc
53
+	contentHasher  ContentHasher
38 54
 	orderValidator Validator
39 55
 	hlValidator    Hardlinks
40 56
 }
... ...
@@ -81,8 +95,10 @@ func (r *receiver) run(ctx context.Context) error {
81 81
 	g, ctx := errgroup.WithContext(ctx)
82 82
 
83 83
 	dw, err := NewDiskWriter(ctx, r.dest, DiskWriterOpt{
84
-		AsyncDataCb: r.asyncDataFunc,
85
-		NotifyCb:    r.notifyHashed,
84
+		AsyncDataCb:   r.asyncDataFunc,
85
+		NotifyCb:      r.notifyHashed,
86
+		ContentHasher: r.contentHasher,
87
+		Filter:        r.filter,
86 88
 	})
87 89
 	if err != nil {
88 90
 		return err
... ...
@@ -91,7 +107,11 @@ func (r *receiver) run(ctx context.Context) error {
91 91
 	w := newDynamicWalker()
92 92
 
93 93
 	g.Go(func() error {
94
-		err := doubleWalkDiff(ctx, dw.HandleChange, GetWalkerFn(r.dest), w.fill)
94
+		destWalker := emptyWalker
95
+		if !r.merge {
96
+			destWalker = GetWalkerFn(r.dest)
97
+		}
98
+		err := doubleWalkDiff(ctx, dw.HandleChange, destWalker, w.fill)
95 99
 		if err != nil {
96 100
 			return err
97 101
 		}
... ...
@@ -105,12 +125,23 @@ func (r *receiver) run(ctx context.Context) error {
105 105
 	g.Go(func() error {
106 106
 		var i uint32 = 0
107 107
 
108
+		size := 0
109
+		if r.progressCb != nil {
110
+			defer func() {
111
+				r.progressCb(size, true)
112
+			}()
113
+		}
108 114
 		var p Packet
109 115
 		for {
110 116
 			p = Packet{Data: p.Data[:0]}
111 117
 			if err := r.conn.RecvMsg(&p); err != nil {
112 118
 				return err
113 119
 			}
120
+			if r.progressCb != nil {
121
+				size += p.Size()
122
+				r.progressCb(size, false)
123
+			}
124
+
114 125
 			switch p.Type {
115 126
 			case PACKET_STAT:
116 127
 				if p.Stat == nil {
117 128
deleted file mode 100644
... ...
@@ -1,14 +0,0 @@
1
-// +build !linux,!windows
2
-
3
-package fsutil
4
-
5
-import (
6
-	"runtime"
7
-
8
-	"github.com/pkg/errors"
9
-	"golang.org/x/net/context"
10
-)
11
-
12
-func Receive(ctx context.Context, conn Stream, dest string, notifyHashed ChangeFunc) error {
13
-	return errors.Errorf("receive is unsupported in %s", runtime.GOOS)
14
-}
... ...
@@ -2,7 +2,8 @@ package fsutil
2 2
 
3 3
 import (
4 4
 	"os"
5
-	"path/filepath"
5
+	"path"
6
+	"runtime"
6 7
 	"sort"
7 8
 	"strings"
8 9
 
... ...
@@ -26,14 +27,17 @@ func (v *Validator) HandleChange(kind ChangeKind, p string, fi os.FileInfo, err
26 26
 	if v.parentDirs == nil {
27 27
 		v.parentDirs = make([]parent, 1, 10)
28 28
 	}
29
-	if p != filepath.Clean(p) {
29
+	if runtime.GOOS == "windows" {
30
+		p = strings.Replace(p, "\\", "", -1)
31
+	}
32
+	if p != path.Clean(p) {
30 33
 		return errors.Errorf("invalid unclean path %s", p)
31 34
 	}
32
-	if filepath.IsAbs(p) {
35
+	if path.IsAbs(p) {
33 36
 		return errors.Errorf("abolute path %s not allowed", p)
34 37
 	}
35
-	dir := filepath.Dir(p)
36
-	base := filepath.Base(p)
38
+	dir := path.Dir(p)
39
+	base := path.Base(p)
37 40
 	if dir == "." {
38 41
 		dir = ""
39 42
 	}
... ...
@@ -51,12 +55,12 @@ func (v *Validator) HandleChange(kind ChangeKind, p string, fi os.FileInfo, err
51 51
 	}
52 52
 
53 53
 	if dir != v.parentDirs[len(v.parentDirs)-1].dir || v.parentDirs[i].last >= base {
54
-		return errors.Errorf("changes out of order: %q %q", p, filepath.Join(v.parentDirs[i].dir, v.parentDirs[i].last))
54
+		return errors.Errorf("changes out of order: %q %q", p, path.Join(v.parentDirs[i].dir, v.parentDirs[i].last))
55 55
 	}
56 56
 	v.parentDirs[i].last = base
57 57
 	if kind != ChangeKindDelete && fi.IsDir() {
58 58
 		v.parentDirs = append(v.parentDirs, parent{
59
-			dir:  filepath.Join(dir, base),
59
+			dir:  path.Join(dir, base),
60 60
 			last: "",
61 61
 		})
62 62
 	}
... ...
@@ -13,8 +13,9 @@ import (
13 13
 )
14 14
 
15 15
 type WalkOpt struct {
16
-	IncludePaths    []string // todo: remove?
16
+	IncludePatterns []string
17 17
 	ExcludePatterns []string
18
+	Map             func(*Stat) bool
18 19
 }
19 20
 
20 21
 func Walk(ctx context.Context, p string, opt *WalkOpt, fn filepath.WalkFunc) error {
... ...
@@ -57,9 +58,9 @@ func Walk(ctx context.Context, p string, opt *WalkOpt, fn filepath.WalkFunc) err
57 57
 		}
58 58
 
59 59
 		if opt != nil {
60
-			if opt.IncludePaths != nil {
60
+			if opt.IncludePatterns != nil {
61 61
 				matched := false
62
-				for _, p := range opt.IncludePaths {
62
+				for _, p := range opt.IncludePatterns {
63 63
 					if m, _ := filepath.Match(p, path); m {
64 64
 						matched = true
65 65
 						break
... ...
@@ -138,7 +139,12 @@ func Walk(ctx context.Context, p string, opt *WalkOpt, fn filepath.WalkFunc) err
138 138
 		case <-ctx.Done():
139 139
 			return ctx.Err()
140 140
 		default:
141
-			if err := fn(path, &StatInfo{stat}, nil); err != nil {
141
+			if opt != nil && opt.Map != nil {
142
+				if allowed := opt.Map(stat); !allowed {
143
+					return nil
144
+				}
145
+			}
146
+			if err := fn(stat.Path, &StatInfo{stat}, nil); err != nil {
142 147
 				return err
143 148
 			}
144 149
 		}
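With `IncludePaths` renamed to `IncludePatterns` and the new `Map` hook, `Walk` can now filter and rewrite entries before they reach the walk function. A minimal sketch of calling it directly:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/net/context"

	"github.com/tonistiigi/fsutil"
)

func main() {
	// Walk the current directory, skipping .git, and normalize ownership
	// through the Map callback before each entry reaches the WalkFunc.
	err := fsutil.Walk(context.Background(), ".", &fsutil.WalkOpt{
		ExcludePatterns: []string{".git"},
		Map: func(st *fsutil.Stat) bool {
			st.Uid, st.Gid = 0, 0 // hide numeric ownership from the consumer
			return true           // returning false drops the entry
		},
	}, func(path string, fi os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		fmt.Println(path, fi.Size())
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```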