ownCloud Chunking NG Part 3: Incremental Syncing

November 13, 2015

This is the third and final part of a little blog series about a new chunking algorithm that we discussed in ownCloud. You might also want to read the first two parts, ownCloud Chunking NG and Announcing an Upload.

This part sketches a couple of ideas for how the new chunking could support a future incremental sync (also called delta sync) feature in ownCloud.

In preparation for delta sync, the server could provide another new WebDAV route: remote.php/dav/blocks.

For each file, remote.php/dav/blocks/file-id exists as long as the server has valid checksums for the blocks of the file, which is identified by its unique file id.

A successful reply to remote.php/dav/blocks/file-id returns a JSON-formatted data block with byte ranges and the respective checksums (and the checksum type) over the data blocks of the file. The client can use that information to calculate which blocks of data have changed and thus need to be uploaded.
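
To make this more concrete, here is a purely hypothetical example of what such a reply could look like, together with a small Python sketch that recomputes the block checksums locally. The field names, block size and checksum values are illustrative only; nothing about the JSON layout is specified yet.

import hashlib
import json

# Hypothetical reply from GET remote.php/dav/blocks/file-id.
# Field names, block sizes and checksums are made up for illustration.
reply = json.loads("""
{
  "checksumType": "SHA1",
  "blocks": [
    { "offset": 0,        "size": 10485760, "checksum": "3f786850e387..." },
    { "offset": 10485760, "size": 10485760, "checksum": "89e6c98d9288..." }
  ]
}
""")

def changed_blocks(path, blocklist):
    """Return the indices of local blocks whose checksum differs from the
    server-side blocklist and therefore would need to be uploaded."""
    changed = []
    with open(path, "rb") as f:
        for index, block in enumerate(blocklist["blocks"]):
            f.seek(block["offset"])
            data = f.read(block["size"])
            digest = hashlib.new(blocklist["checksumType"].lower(), data).hexdigest()
            if digest != block["checksum"]:
                changed.append(index)
    return changed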

If a file was changed on the server and the checksums are as a result no longer valid, access to remote.php/dav/blocks/file-id returns the 404 "not found" status code. The client needs to be able to handle missing checksum information at any time.

The server receives the checksums of the file blocks along with the upload of the chunks from the client. The server is not obliged to calculate checksums for data blocks that came in through channels other than the clients, yet it can if there is capacity.

To implement incremental sync, the following high-level flow could be used (a code sketch follows the list):

  1. The client downloads the blocklist of the file: GET remote.php/dav/blocks/file-id
  2. If the GET succeeded: the client computes the local blocklist and determines which blocks have changed
  3. If the GET failed: all blocks of the file have to be uploaded.
  4. Client sends request MKCOL /uploads/transfer-id as described in an earlier part of the blog.
  5. For blocks that have changed: PUT data to /uploads/transfer-id/part-no
  6. For blocks that have NOT changed: COPY /blocks/file-id/block-no /uploads/transfer-id/part-no
  7. If all blocks are handled by either being uploaded or copied: Client sends MOVE /uploads/transfer-id /path/to/target-file to finalize the upload.
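
As a sketch of how a client could implement this flow, here is some Python using the requests library. The base URL, the exact route prefixes for /blocks and /uploads, the part naming and the destination URL of the final MOVE are all assumptions for illustration, not part of any agreed specification. The blocklist and the set of changed block numbers would come from the GET on remote.php/dav/blocks/file-id and a comparison like the changed_blocks() sketch above.

import requests

BASE = "https://cloud.example.com/remote.php/dav"   # assumed server URL and route prefix
AUTH = ("user", "password")

def incremental_upload(local_path, file_id, transfer_id, target_url, blocklist, changed):
    # 4. create the upload directory for this transfer
    requests.request("MKCOL", f"{BASE}/uploads/{transfer_id}", auth=AUTH)

    with open(local_path, "rb") as f:
        for no, block in enumerate(blocklist["blocks"]):
            if no in changed:
                # 5. changed block: upload its data as a part
                f.seek(block["offset"])
                requests.put(f"{BASE}/uploads/{transfer_id}/{no}",
                             data=f.read(block["size"]), auth=AUTH)
            else:
                # 6. unchanged block: server-side COPY from the blocks route
                requests.request("COPY", f"{BASE}/blocks/{file_id}/{no}", auth=AUTH,
                                 headers={"Destination": f"{BASE}/uploads/{transfer_id}/{no}"})

    # 7. all parts are in place: finalize the upload with a MOVE
    requests.request("MOVE", f"{BASE}/uploads/{transfer_id}", auth=AUTH,
                     headers={"Destination": target_url})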

This would be an extension to the previously described upload of complete files. The PROPFIND semantics on /uploads/transfer-id remain valid.

Depending on the share of unchanged blocks, this could dramatically cut the amount of data that has to be uploaded. More information has to be collected to find out how big the savings actually are.

Note that this is still an idea under discussion, not yet an agreed specification for a new chunking algorithm.

Please, as usual, share your feedback with us!

ownCloud Chunking NG

June 22, 2015

Recently Thomas and I met in person and thought about an alternative approach to take our big file chunking to the next level. “Big file chunking” is ownCloud’s algorithm for uploading huge files to ownCloud with the clients.

This is the first of three little blog posts in which we want to present the idea and get your feedback. This is for open discussion, nothing is set in stone so far.

What is the downside of the current approach? Well, the current algorithm needs a lot of knowledge distributed between server and client to work: the naming scheme of the part files, semi-secret headers, implicit assumptions. In addition, due to the character of the algorithm, the server code is spread too widely over the whole code base, which makes maintenance difficult.

This situation could be improved with the following approach.

To handle chunked uploads, there will be a new WebDAV route, called remote.php/uploads.
All uploads of files larger than the chunk size will go through this route.

In a nutshell, a big file is uploaded in parts into a directory under that new route. The client creates this directory through the new route, which initiates a new upload. If the directory could be created successfully, the client starts to upload chunks of the original file into that directory. The sequence of the chunks is set by the names of the chunk files created in the directory. Once all chunks are uploaded, the client submits a MOVE request that renames the chunk upload directory to the target file.

Here is a pseudo code description of the sequence:

1. Client creates an upload directory with a self-chosen name (ideally a numeric upload id):

MKCOL remote.php/uploads/upload-id

2. Client sends a chunk:

PUT remote.php/uploads/upload-id/chunk-id

3. Client repeats 2. until all chunks have successfully been uploaded
4. Client finalizes the upload:

MOVE remote.php/uploads/upload-id /path/to/target-file

5. With the MOVE, the client sends the ETag that is supposed to be overwritten in a request header to the server. The server returns the new ETag and FileID as reply headers of the MOVE.
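
For illustration, a minimal Python sketch of this sequence using the requests library could look like the following. The chunk size, the zero-padded chunk names and the If-Match header carrying the ETag to be overwritten are assumptions; the post does not pin down these details.

import requests

BASE = "https://cloud.example.com/remote.php"   # assumed server URL
AUTH = ("user", "password")
CHUNK_SIZE = 10 * 1024 * 1024                   # assumed chunk size of 10 MiB

def chunked_upload(local_path, upload_id, destination_url, overwritten_etag=None):
    # 1. create the upload directory
    requests.request("MKCOL", f"{BASE}/uploads/{upload_id}", auth=AUTH)

    # 2./3. upload the chunks; the zero-padded names define their order
    with open(local_path, "rb") as f:
        chunk_id = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            requests.put(f"{BASE}/uploads/{upload_id}/{chunk_id:08d}",
                         data=data, auth=AUTH)
            chunk_id += 1

    # 4./5. finalize: MOVE the upload directory onto the target file
    headers = {"Destination": destination_url}
    if overwritten_etag:
        headers["If-Match"] = overwritten_etag   # assumed header for the old ETag
    reply = requests.request("MOVE", f"{BASE}/uploads/{upload_id}",
                             headers=headers, auth=AUTH)
    return reply.headers   # expected to carry the new ETag and FileID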

During the upload, the client can retrieve the current state of the upload with a PROPFIND request on the upload directory. The result will be a listing of all chunks that are already available on the server, with metadata such as mtime, checksum and size.
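
A sketch of such a status check, again with assumed URLs and only the plain DAV href of each chunk, could look like this:

import requests
import xml.etree.ElementTree as ET

BASE = "https://cloud.example.com/remote.php"   # assumed server URL
AUTH = ("user", "password")

def list_uploaded_chunks(upload_id):
    # PROPFIND with Depth: 1 lists the chunks already present in the upload directory
    reply = requests.request("PROPFIND", f"{BASE}/uploads/{upload_id}",
                             headers={"Depth": "1"}, auth=AUTH)
    tree = ET.fromstring(reply.content)
    ns = {"d": "DAV:"}
    return [resp.findtext("d:href", namespaces=ns)
            for resp in tree.findall("d:response", ns)]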

If the server decides to remove an upload, e.g. because it hasn’t been active for a while, it is free to remove the entire upload directory and return status 404 if a client tries to upload to it. Also, a client is allowed to remove the entire upload directory to cancel an upload.

An upload is finalized by the MOVE request. Note that it’s a MOVE of a directory onto a single file. This operation is not supported in normal file systems, but we think that in this case it has a nice, descriptive meaning. A MOVE is known as an atomic and fast operation, and that is how the server should implement it.

Also note that only with the final MOVE is the upload operation associated with the final destination file. We think that this approach is already a great improvement, because there is always a clear state of the upload, with no secret knowledge hidden in the process.

In the next blog post I will discuss an extension that adds more features to the process.

What do you think so far? Your feedback is appreciated, best on the ownCloud devel mailing list!

DAV Torture

September 27, 2013

Currently we talk a lot about the performance of the ownCloud WebDAV server. Speaking with a computer programmer about performance is like speaking with a doctor about pain: it needs to be qualified, the pain as well as the performance concerns.

As a step in that direction, here is a little script collection for you to play with if you like: the DAV torture collection. We started it quite some time ago but never really introduced it. It is still very rough.

What it does

The first idea is that we need a reproducible set of files to test the server with. We don’t want to send around huge tarballs with files, so Danimo wrote two Perl scripts called torture_gen_layout.pl and torture_create_files.pl. With torture_gen_layout.pl one can create a file that contains the layout of the test file tree, a so-called layout (or .lay) file. The .lay-file describes the test file tree completely, with names, structure and sizes.

torture_create_files.pl takes the .lay-file and actually creates the file tree on a machine. The cool thing about it is that we can commit to one .lay-file as our standard test tree and just pass around a file of a couple of kilobytes that describes the tree.

Now that there is a standard file tree to test with, I wrote a little script called dav_torture.pl. It copies the whole tree, described by a .lay-file and created on the local file system, to an ownCloud WebDAV server using PUT requests. Along the way, it produces performance-relevant output.
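
The actual script is written in Perl, but the core idea is simple enough to sketch in a few lines of Python. The URL and credentials are placeholders, and the error handling and performance bookkeeping of dav_torture.pl are left out:

import os
import time
import requests

BASE = "https://cloud.example.com/remote.php/webdav"   # assumed server URL
AUTH = ("user", "password")

def put_tree(local_root):
    # Walk the local tree, mirror the directories with MKCOL and upload
    # every file with PUT, then report the average transfer rate.
    total_bytes, start = 0, time.time()
    for dirpath, _dirs, filenames in os.walk(local_root):
        rel = os.path.relpath(dirpath, local_root)
        remote_dir = BASE if rel == "." else f"{BASE}/{rel}"
        requests.request("MKCOL", remote_dir, auth=AUTH)   # ignore "already exists" replies
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                data = f.read()
            requests.put(f"{remote_dir}/{name}", data=data, auth=AUTH)
            total_bytes += len(data)
    rate = total_bytes / (time.time() - start)
    print(f"uploaded {total_bytes} bytes at {rate / 1024:.1f} kB/s")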

Try it

Download the tarball and unpack it, or clone it from github.

After installing a couple of Perl dependencies (probably only the modules Data::Random::WordList, HTTP::DAV and HTTP::Request::Common are not in Perl’s core) you should be able to run the scripts from within the directory.

First, you need to create a config file. For that, copy t1.cfg.in to t1.cfg (don’t ask about the name) and edit it. For this example, we only need user, passwd and url to access ownCloud. Be careful with the syntax, as it gets sourced into a Perl script.

Now create the local reference tree from the .lay-file which I put into the tarball:

./torture_create_files.pl small.lay tree

This command will build the file tree described by small.lay into the directory called tree.

Now, you can already treat your server: Call

./dav_torture.pl small.lay tree

This will perform PUT commands to the WebDAV server and output some useful information.
It also appends to two files, results.dat and puts.tsv. results.dat just logs the results of subsequent calls. The tsv file is the data file for the HTML file index.html in the same directory. Opened in a browser, it shows a curve of the average transmission rate over all subsequent runs of dav_torture.pl (you have to run dav_torture.pl a couple of times to make that visible). The dav_torture.pl script can now be hooked into our Jenkins CI and run after every server checkin. The resulting curve must never rise 🙂

To create your own .lay-file, open torture_gen_layout.pl and play with the variables at the top of the script. Then simply call the script and redirect its output into a file to create the .lay-file.

All this is pretty experimental, but I thought it would help us get to a more objective discussion about performance. I wanted to open this up at a pretty early stage because I am hoping that it might be interesting for some of you: treat your own server, create interesting .lay files, or improve the script set (testing plain PUTs is rather boring) or the HTML result presentation.

What do you think?

Categories: FOSS, ownCloud