Here is something that might be a little outdated already, but I hope it still adds some interesting thoughts. This rainy Sunday afternoon finally gives me the opportunity to write this little blog post.
Recently an ownCloud fork came up with a shiny little box with one hard disk, which can be complemented with a Raspberry Pi and their software, promoting that as your private cloud.
While I like the idea of building a private cloud for everybody (I started to work on ownCloud because of that idea back in the day), I do not think that this kind of gear is a good solution for a private cloud.
In fact I believe that throwing this kind of implementation on the table is especially unfortunate, because if we come up with too many suboptimal proposals, we waste the willingness of users to try them. This idea should not target geeks who might be willing to try ideas over and over. The idea of the private cloud needs to target every computer user who wants to store data safely, but does not want to care about it any longer than absolutely necessary. And with them I fear we have only a small chance, if one at all, to introduce them to a private cloud solution before they go back to something that simply works.
Here are some points why I think solutions like the proposed one are not good enough:
Weak Hardware
That is nothing new: The hardware of the Raspberry Pi was not designed for this kind of use case. It is simply too weak to drive ownCloud, which is a PHP app plus a database server with some requirements on the server's power. Even with PHP 7, which is faster, and the latest revisions of the mini computer, it might look OK in the beginning, but after all the necessary bells and whistles have been added to the installation and data has run in, it will turn out that the CPU power is simply not enough. Similar weaknesses apply to the networking capabilities, for example.
A user who finds that out after a couple of weeks of working with the system will be angry and probably go (back) to solutions that we do not fancy.
One Disk Setup
The solution comes as a one-disk setup: How secure can data be that sits on one single hard disk? A seriously engineered solution should at least recommend a way to store the data more securely and/or back it up, for example on a NAS at home.
That can be done, but it requires manual work and might require more network capability and CPU power.
Firewall and DynDNS
Last, but for me the most important point: Having such a box in the private network requires drilling a hole in the firewall to allow port forwarding. I know, that is nothing unusual for experienced people, and in theory it is little problem.
But for people who are not so interested, it means they need to click a button in the interface of their router without understanding what it does, and maybe even enter data by following documentation that they simply have to trust. (That is not very different from downloading a script from somewhere and letting it make the changes, which I would not recommend either.)
Making mistakes here could have a huge impact on the network behind the router, without the person who made them even understanding the consequences.
DynDNS is also needed: That too is not a big problem in theory and for geeks, but in practice it is not easily done.
A good private cloud solution should not require this kind of setup.
Where to go from here?
There should be better ways to solve these problems with ownCloud, and I am sure ownCloud is the right tool to solve them. I will share some thought experiments that we did a while back, to foster discussion on how we can use the Raspberry Pi with ownCloud (because it is a very attractive piece of hardware) while solving these problems.
This will be the subject of an upcoming blog post here, so please stay tuned.
Even though we just had the nice and successful ownCloud Contributor Conference, quite a few ownCloud releases have happened recently. I'd like to draw your attention to this for a moment, because some people seem to fail to see how active the ownCloud community actually is at the moment.
There was the big enterprise release 9.1 on September 20th, but that of course came along with community releases, which are the focus here.
We had server releases 8.0.15, 8.1.10, 8.2.8 and 9.0.5. These are maintenance releases for the older major versions, needed to fix bugs on installations that still run them. We deliver them following this plan.
The latest and greatest server release is release 9.1.1 that has all the hardening that also went into the enterprise releases.
Aside from a ton of bugfixes, which you find listed in the changelog, there have also been interesting changes which drive innovation. To pick just one example: the data fingerprint property. It enables the clients to detect if the server had a backup restored, and saves changes on the clients to conflict files if needed. This is a nice example of solutions based on feedback from the enterprise customers and community running ownCloud, who help by reporting problems and proposing solutions.
Talking about professional usage of ownCloud: Of course all the server releases are also available as Linux packages for various distributions, for example the ownCloud server 9.1.1 packages. We think that our users should not be forced to deploy from tarballs, which is error prone and not native to Linux, but should have the choice to use Linux packages through the distribution's package management.
There have also been client releases recently: The Android client versions 2.1.1 and 2.1.2 were released with important changes for Android 7 and many more fixes, as well as iOS client versions 3.5.0 and 3.5.1. The desktop client also got a regular bug fix update with version 2.2.4 (Changelog).
I guess you agree that this is a lot of activity in the ownCloud project, making sure the best ownCloud experience gets out there for the users, driven by passion for the project and with professional usage in focus.
A couple of weeks ago we released another significant milestone of the ownCloud Client, version 2.2.0, followed by two small maintenance releases (download). I'd like to highlight some of the new features and the changes we have made to improve the user experience:
Overlay icons for the various file managers on our three platforms have existed for quite some time, but it turned out that the performance was not up to the mark for big sync folders. The reason was mainly that too much communication was happening between the file manager plugin and the client. Once asked about the sync state of a single file, the client had to jump through quite some hoops to retrieve the required information. That involved not only database access to the sqlite-based sync journal, but also file system interaction to gather file information. Not a big deal if it's only a few files, but if the user syncs huge amounts, these efforts sum up.
This becomes especially tricky for the propagation of changes upwards in the file tree. Imagine a sync error happens in foo/bar/baz/myfile. What should happen is that a warning icon appears on the icon for foo in the file manager, indicating that a problem exists within this directory. The complexity of the existing implementation was already high, and adding this extra functionality would have pushed the reliability of the code even lower than it already was.
Jocelyn was keen enough to refactor the underlying code, which we call the SocketApi. Starting from the basic assumption that all files are in sync, so that the code only has to track files that are new, changed, erroneous, ignored or similar, the amount of data to keep is very much reduced, which makes processing way faster.
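The basic idea can be illustrated with a tiny sketch (class and state names are mine, not the actual client code): only files that deviate from the in-sync state are kept in a map, every lookup that misses the map defaults to OK, and propagating a problem upwards only has to consider the few stored paths:

```python
OK = "OK"

class SyncStateStore:
    """Store only files that are NOT plainly in sync;
    everything absent from the store is assumed to be OK."""

    def __init__(self):
        self._states = {}  # path -> "SYNC", "ERROR", "IGNORE", ...

    def set_state(self, path, state):
        # Files that return to the in-sync state are simply dropped,
        # which keeps the map small for big sync folders.
        if state == OK:
            self._states.pop(path, None)
        else:
            self._states[path] = state

    def state(self, path):
        # A directory inherits an ERROR from any entry below it,
        # so the warning icon can bubble up to the top-level folder.
        if self._states.get(path) == "ERROR":
            return "ERROR"
        prefix = path.rstrip("/") + "/"
        for p, s in self._states.items():
            if p.startswith(prefix) and s == "ERROR":
                return "ERROR"
        return self._states.get(path, OK)
```

With this layout, the common case (a file that is in sync) is answered without touching the database or the file system at all.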
On the ownCloud server, there are situations where notifications are created to make the user aware of things that happened.
An example are federated shares:
If somebody shares a folder with you, you previously had to acknowledge it through the web interface. This explicit step is a safety net to avoid people sharing tons of Gigabytes of content, filling up your disk.
With 2.2.x, you can acknowledge the share right from the client, saving you the round trip to the web interface to check for new shares.
Keeping an Eye on Word & Friends
Microsoft Word and other office tools are rather hard to deal with in syncing, because they lock the files being worked on very strictly. So strictly that the sync app is not even allowed to open the file, not even for reading, which would be required to sync it.
As a result, the sync client needs to wait until Word unlocks the file, and then continue syncing.
For previous versions of the client, this was hard to detect and worked only if other changes happened in the same directory where the file in question resides.
With 2.2.0 we added a special watcher that keeps an eye on the office documents that Word and friends are blocking. Once the files are unlocked, the watcher starts a sync run to get the files up to the server, or down from it.
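The actual client implementation differs, but the principle can be sketched as a simple poll loop (the function names here are my own, and the lock detection is a heuristic: on Windows, an Office exclusive lock makes even opening for reading fail):

```python
import time

def is_locked(path):
    """Heuristic lock check: try to open the file for reading.
    On Windows, Office holds an exclusive lock that makes this fail."""
    try:
        with open(path, "rb"):
            return False
    except OSError:  # PermissionError on a locked Office file
        return True

def watch_until_unlocked(paths, trigger_sync, poll_seconds=5, max_polls=None):
    """Poll the given files; once none of them is locked any more,
    trigger a sync run and report success."""
    polls = 0
    while any(is_locked(p) for p in paths):
        time.sleep(poll_seconds)
        polls += 1
        if max_polls is not None and polls >= max_polls:
            return False  # gave up while files were still locked
    trigger_sync(paths)
    return True
```

A real watcher would of course run asynchronously instead of blocking, but the flow is the same: detect the lock, wait, then kick off a sync run.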
Advances on Desktop Sharing
The sharing has been further integrated and received several UX fixes and bugfixes. There is more feedback when performing actions, so you know when your client is waiting for a response from the server. The client now also respects more data returned by the server, for example if you have apps enabled on the server that limit the expiration date.
Furthermore, we better respect the granted share permissions. This means that if somebody shared a folder with you without create permission, and you want to reshare this folder from the client, you won't get the option to share with delete permission. This avoids errors when sharing and is more in line with how the whole ownCloud platform handles re-sharing. We also adjusted the behavior for federated reshares with the server.
Please note that to take full advantage of all improvements, you will need to run at least server version 9.0.
Last night I finally found time to install the first release candidate of Volumio 2, my preferred audio player software. This is more exciting than it sounds, because when I read the blog post last summer that Volumio was going to be completely rewritten, replacing its base technologies, I was a bit afraid that this would be one of the last things we heard from the project. Too many cool projects have died after famous last announcements like that.
But not Volumio.
After quite some development time the project released RC1. While there were a few small bugs in the beta, my feelings about RC1 are really positive. Volumio 2 has a very nice and stylish GUI, a great improvement over Volumio 1. Album art is now nicely integrated in the playback pane and everything is more shiny, even if the general concept is the same as in Volumio 1.
I like it because it is only a music player. Very reduced to that, but also very well thought through and focused on fulfilling that job perfectly. I just want to find and play music from my collection, quickly and comfortably and with good sound quality. No movies, series, images. Just sound.
About speed: While scanning my not-too-big music collection on a NAS was a bit time consuming in the past, it now feels much faster (maybe that's only because of a faster network between the Raspberry and the NAS?). Searching, browsing and everything else works quite fluidly on a Raspberry Pi 2. And with the Hifiberry DAC for output, the sound quality is more than OK.
This is a release candidate of the first release of the rewritten project, and the quality is already very good. Nevertheless I found a few things that did not work for me or could be improved. That the volume control is not working is probably due to the Hifiberry DAC driver; I remember there was something, but I haven't investigated yet.
There are some things in the GUI that could be looked at again: For example, on the Browse page there is a very well working search field. After entering the search term and hitting Enter, the search result is displayed as a list of songs to select from. I wish the songs were additionally grouped by albums, which should also be selectable to be pushed to the play queue.
It would also be great if the Queue somehow indicated which entry is currently playing. I could not spot that.
But these are only minor findings which can easily be addressed later, after enhancement requests have been posted 🙂
I think Volumio 2 is already a great success, even before its release! You should not hesitate to try it if you love listening to music!
Thanks for the hard work Volumio-Team!
This is the third and final part of a little blog series about a new chunking algorithm that we discussed in ownCloud. You might be interested in reading the first two parts, ownCloud Chunking NG and Announcing an Upload, as well.
This part sketches a couple of ideas for how the new chunking could be useful for a future incremental sync feature (also called delta sync) in ownCloud.
In preparation for delta sync, the server could provide another new WebDAV route: remote.php/dav/blocks.
For each file, remote.php/dav/blocks/file-id exists as long as the server has valid checksums for blocks of the file which is identified by its unique file id.
A successful reply to remote.php/dav/blocks/file-id returns a JSON-formatted data block with byte ranges and the respective checksums (and the checksum type) over the data blocks of the file. The client can use that information to calculate the blocks of data that have changed and thus need to be uploaded.
If a file was changed on the server and as a result the checksums are no longer valid, access to remote.php/dav/blocks/file-id returns the 404 "not found" code. The client needs to be able to handle missing checksum information at any time.
The server gets the checksums of file blocks along with the upload of the chunks from the client. The server is not obliged to calculate checksums for data blocks that came in other than through the clients, yet it can if there is capacity.
To implement incremental sync, the following high level processing could be implemented:
- The client downloads the blocklist of the file: GET remote.php/dav/blocks/file-id
- If GET succeeded: Client computes the local blocklist and computes changes
- If GET failed: All blocks of the file have to be uploaded.
- Client sends request MKCOL /uploads/transfer-id as described in an earlier part of the blog.
- For blocks that have changed: PUT data to /uploads/transfer-id/part-no
- For blocks that have NOT changed: COPY /blocks/file-id/block-no /uploads/transfer-id/part-no
- If all blocks are handled by either being uploaded or copied: Client sends MOVE /uploads/transfer-id /path/to/target-file to finalize the upload.
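The client-side part of the steps above can be sketched as follows. Note that the blocklist format and the fixed block size are assumptions for illustration; the real values and field names would be defined by the server's blocks route:

```python
import hashlib

# Assumed block size; in practice this would come from the server's blocklist.
BLOCK_SIZE = 10 * 1024 * 1024

def local_blocklist(path, block_size=BLOCK_SIZE):
    """Compute checksums over fixed-size blocks of a local file,
    mirroring the assumed format of GET remote.php/dav/blocks/file-id."""
    blocks = []
    with open(path, "rb") as f:
        offset = 0
        while True:
            data = f.read(block_size)
            if not data:
                break
            blocks.append({
                "offset": offset,
                "size": len(data),
                "checksum": hashlib.sha1(data).hexdigest(),
                "type": "SHA1",
            })
            offset += len(data)
    return blocks

def changed_blocks(local, remote):
    """Return the indices of blocks that differ and must be PUT;
    all other blocks can be handled with a server-side COPY."""
    changed = []
    for i, block in enumerate(local):
        if i >= len(remote) or remote[i]["checksum"] != block["checksum"]:
            changed.append(i)
    return changed
```

If the GET of the blocklist fails, the client simply treats every index as changed and uploads all blocks, which degrades gracefully to the full upload described in the first part.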
This would be an extension to the previously described upload of complete files. The PROPFIND semantic on /uploads/transfer-id remains valid.
Depending on the amount of unchanged blocks, this could be a dramatic cut in the data that has to be uploaded. More information has to be collected to find out how much that is.
Note that this is still in the idea- and to-be-discussed state, and not yet an agreed specification for a new chunking algorithm.
Please, as usual, share your feedback with us!
Something I wanted to share is a device I recently built. It is a complete network player for music files, served by a NAS. It's now finished and “in production”, so here is my report.
Hardware
The Hifiberry Amp+ is a high-quality power amplifier that is mounted on the Raspberry mini computer, bypassing the suboptimal onboard audio of the Raspberry Pi. Only loudspeakers need to be connected, and this little combination is a very capable stereo audio system.
On the Raspberry runs a specialized Linux distribution with a web-based player for audio files and web radio. The name of this project is Volumio and it is currently one of my favorite projects. What I like about it is that it is only a music player, not a project that tries to manage all kinds of media. Volumio's user interface is accessible via web browser, and it is very user friendly, modern, pretty and clean.
It works at all size factors (from mobile phone to desktop) and is very easy to handle. It's a web interface to mpd, which runs on the Raspberry Pi and does the actual music indexing and playing. The distribution everything runs on is an optimized Raspbian, with a variety of kernel drivers for audio output devices. Everything is preconfigured. Once the Volumio image is written to the SD card, the Raspberry boots up, and the device basically starts to play nicely after the network and media source have been configured through the web interface.
It is impressive how the Volumio project aims for absolute simplicity in use, but still allows playing around with a whole lot of interesting settings. A lot can be configured, but very little has to be.
Bottom line: Volumio is a really interesting project which you will want to try if you are interested in these things.
I built the housing out of cherry wood from the region here. I got it from friends who grow cherries, and they gave me a few very nice shelves. The wood was sliced and planed to 10 mm thickness. The dark inlay is teak which I got from 70s furniture that was found on bulky waste years ago.
After everything was cut, the cherry wood was glued together, with some internal support construction from plywood. After sanding and polishing, the box was done. The Raspberry and the Hifiberry are mounted on a little construction attached to the metal back cover, together with the speaker connector. The metal cover is tightened with one screw, and the whole electronics can be taken out of the wooden box by unscrewing it.
At the bottom of the device there is a switching power supply that provides the energy for the player.
How to Operate?
The player is completely operated from the web interface. That makes all additional knobs and buttons superfluous; the user even uses the tablet or notebook to adjust the volume. And since the device is never switched off, it does not even have a power button.
I combined it with Denon SC-M39 speakers. The music files come from a consumer NAS in the home network. The Raspberry is easily powerful enough for the software, and the Hifiberry device is surprisingly powerful and clean. The sound is very nice: clear and fresh in the mids and highs, but still with enough bass, which is never annoyingly blurry or droning.
I am very happy about the result of this little project. Hope you like it 🙂
The first part of this little blog series explained the basic operations of chunk file upload as we set it up for discussion. This part goes a bit beyond and talks about an addition to that, called announcing the upload.
With the processing described in the first part of the blog, the upload is done safely and with a clean approach, but it also has some drawbacks.
Most notably, the server does not know the target filename of the uploaded file upfront. It also does not know the final size or mimetype of the target file. That is not a problem in general, but imagine the following situation: A big file is uploaded that would exceed the user's quota. That would only become an error for the user once all uploads have happened and the final upload directory is moved to the final file name.
To avoid useless file transfers like that, or to implement features like a file firewall, it would be good if the server knew this data at the start of the upload and could stop the upload in case it cannot be accepted.
To achieve that, the client creates a file called _meta in /uploads/ before the upload of the chunks starts. The file contains information such as overall size, target file name and other meta information.
The server’s reply to the PUT of the _meta file can be a failure result code and an error description, indicating that the upload will not be accepted due to certain server conditions. The client should check the result codes in order to avoid unnecessarily uploading data whose final MOVE would fail anyway.
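To make this concrete, here is a sketch of what the _meta content and the server-side acceptance check could look like. The field names, the JSON encoding and the chosen status codes are my assumptions for illustration, not a decided format:

```python
import json

# Hypothetical _meta payload announcing the upload.
META_EXAMPLE = json.dumps({
    "target": "/path/to/target-file",
    "size": 4295000000,          # overall size in bytes
    "mimetype": "application/octet-stream",
})

def check_upload(meta_json, free_quota_bytes, blocked_mimetypes=()):
    """Decide whether an announced upload can be accepted.
    Returns a (http_status, reason) pair the server could send
    as the reply to the PUT of the _meta file."""
    meta = json.loads(meta_json)
    if meta["size"] > free_quota_bytes:
        # Reject before any chunk is transferred, instead of
        # failing only at the final MOVE.
        return 507, "Insufficient Storage: upload exceeds the user's quota"
    if meta.get("mimetype") in blocked_mimetypes:
        return 415, "Unsupported Media Type: rejected by file firewall"
    return 201, "Created"
```

A client that receives a non-success code here would abort the transfer right away, saving the whole data volume of the chunks.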
This is just a collection of ideas for an improved big file chunking protocol, nothing is decided yet. But now is the time to discuss. We’re looking forward to hearing your input.
The third and last part will describe how this plays into delta sync, which is especially interesting for big files, as they are usually chunked.