
ownCloud Chunking NG Part 2: Announcing an Upload

The first part of this little blog series explained the basic operations of chunked file upload as we set them up for discussion. This part goes a step further and describes an addition to that, called announcing the upload.

With the processing described in the first part of the blog, the upload is done safely and with a clean approach, but it also has some drawbacks.

Most notably, the server does not know the target filename of the uploaded file upfront. Nor does it know the final size or mimetype of the target file. That is not a problem in general, but imagine the following situation: a big file is uploaded that would exceed the user's quota. The user would only see an error after all chunks have been uploaded and the upload directory is moved to the final file name.

To avoid useless file transfers like that, or to implement features like a file firewall, it would be good if the server knew this data at the start of the upload and could stop the upload if it cannot be accepted.

To achieve that, the client creates a file called _meta in /uploads/ before the upload of the chunks starts. The file contains information such as the overall size, the target file name, and other metadata.
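As a sketch, the _meta file could carry a small JSON document like the one below. The field names, the JSON format itself, and the upload path are assumptions for illustration; nothing about the payload is specified yet.

```python
import json

# Hypothetical contents of the _meta file, written by the client before
# the first chunk is uploaded. All field names are illustrative only.
meta = {
    "target": "/Photos/holiday-2015.mp4",  # final name the MOVE will use
    "size": 498_762_112,                   # overall size in bytes
    "mimetype": "video/mp4",
    "chunk_count": 48,                     # optional: chunks the server should expect
}

# The client would then PUT this payload into the upload directory,
# e.g. as /uploads/<transfer-id>/_meta.
payload = json.dumps(meta, indent=2)
print(payload)
```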

The server’s reply to the PUT of the _meta file can be a failure result code plus an error description, indicating that the upload will not be accepted due to certain server conditions. The client should check the result code in order to avoid unnecessarily uploading data when the final MOVE would fail anyway.
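On the server side, the check could be as simple as comparing the announced size against the user's remaining quota before any chunk is accepted. A minimal sketch follows; the function, its signature, and the choice of 507 (the WebDAV "Insufficient Storage" status) are assumptions, since the actual codes are still up for discussion:

```python
# Minimal sketch of a server-side check run when the _meta file is PUT.
# Returns an HTTP-style status code plus an error description, mirroring
# the reply described above. All names here are hypothetical.
def check_upload_announcement(meta, remaining_quota):
    size = meta.get("size")
    if size is None:
        return 400, "announcement is missing the overall size"
    if size > remaining_quota:
        return 507, "upload would exceed the user's quota"
    # Further checks (file-firewall rules, rejected mimetypes, ...) fit here.
    return 200, "upload accepted"
```

With a check like this, the client learns from the reply to the _meta PUT, before transferring a single chunk, that the upload would be rejected.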

This is just a collection of ideas for an improved big file chunking protocol, nothing is decided yet. But now is the time to discuss. We’re looking forward to hearing your input.

The third and last part will describe how this plays into delta sync, which is especially interesting for big files, which are usually chunked.

  1. Martin
    July 14, 2015 at 11:13

    For me as a user everything that matters is the upload speed.

  2. kuba
    July 30, 2015 at 09:22

    I would consider looking at whether existing standards could be reused to some extent (as suggested by Andreas in the previous post). Also, from the design perspective, I like the idea of a REST API much more than disguising special functions as WebDAV commands. After all, the function is the same, but the design is clearer and does not try to imply too much about how the functions are implemented on the server. That is to say, the level of abstraction should be one step higher, so the server may or may not decide to map this directly onto storage operations as suggested by your proposal (e.g. MKCOL to start a chunked upload transaction).

  3. Hugo
    July 31, 2015 at 19:55

    Have you taken a look at the Dropbox chunked upload specification (https://www.dropbox.com/developers/core/docs#chunked-upload)?

    From my point of view it looks cleaner and easier.

    Given that most of the APIs out there are implemented as REST APIs, this approach of using WebDAV for everything can be a stopper for developers who are used to working with REST-based APIs.

