HubiC as a NAS disk
Using HubiC cloud storage in a more advanced way
I wrote earlier that this can be done, but I am still testing. Since a lot of people have been asking me how it is done, and it is a fairly complicated thing, I am going to post the main components needed to build your own hubiC mountpoint on Linux. To be clear, I am not talking about hubicfuse.
I struggled for over a week myself to get this working properly; you need to be a skilled Linux user to accomplish this.
Advantages of using hubiC as a drive/NAS mountpoint (NOT hubicfuse):
- A lot of disk space: 10 TB+ (12.5 TB when your referrals are full)
- Files can be used on demand without copying them first (streaming); smaller files are cached on the local system
- Can stream a 1080p video file
- Upload and download work in both directions at the same time
- High performance (in my case the only thing slowing it down was my 150 Mbit/s internet connection)
- Can be used with ownCloud or other cloud systems
- Encryption
- Use any sharing method, like ownCloud, WebDAV, Samba, NFS, you name it!
System preparation:
I have done this on Debian 7 and 8; it works on both, but Debian 8 is more up to date.
- A hubiC-to-Swift authentication gateway, set up properly and running
- An HTTPS server; nginx and Apache are both tested and work. You need this for the gateway to work properly (see the sketch after this list)
- An SSL certificate for it; self-signed is OK
- A domain name or subdomain for the gateway; a static IP may also work
- S3QL, self-compiled, absolute minimum version 2.15 on Debian 8; do not even think about a lower version there, it will not work
- All the dependencies for S3QL 2.15 (on Debian 7 you can use version 1.11; it works, but is slower)
- Find a way to install the required Python modules that the OS cannot install or that conflict; do it manually if you have to
- A hubiC account; a free account will work
- The S3QL manual: RTFM, read all of it. It is going to help you a lot. Just read it.
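To give an idea of the HTTPS part, here is a rough sketch using nginx and a self-signed certificate. The domain hubic-gw.example.com, the file paths and the assumption that the gateway is a PHP application behind php5-fpm are all placeholders; adjust them to whichever gateway you actually deploy.

# Generate a self-signed certificate for the gateway's (sub)domain
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=hubic-gw.example.com" \
    -keyout /etc/ssl/private/hubic-gw.key \
    -out /etc/ssl/certs/hubic-gw.crt

# Minimal nginx vhost for the gateway
cat > /etc/nginx/sites-available/hubic-gateway <<'EOF'
server {
    listen 443 ssl;
    server_name hubic-gw.example.com;

    ssl_certificate     /etc/ssl/certs/hubic-gw.crt;
    ssl_certificate_key /etc/ssl/private/hubic-gw.key;

    # This assumes a PHP-based gateway served by php5-fpm;
    # adjust root and the location block to your own install.
    root /var/www/hubic-gateway;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}
EOF
ln -s /etc/nginx/sites-available/hubic-gateway /etc/nginx/sites-enabled/
nginx -t && service nginx reload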
Putting it all together
Once you have all of that installed and working properly, you will want to test it with the swift command-line client (install it first).
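As an example, with python-swiftclient installed, a quick sanity check through the gateway looks roughly like this; the auth URL, user and key are placeholders for whatever your own gateway expects:

# Account details: if authentication through the gateway works, this returns stats
swift -A https://hubic-gw.example.com/auth/v1.0 -U hubic -K secretkey stat
# List the containers visible through the gateway
swift -A https://hubic-gw.example.com/auth/v1.0 -U hubic -K secretkey list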
Then, if you can see your own storage, go to the hubiC web page and create a folder.
Use mkfs.s3ql to create a filesystem in that folder. Use a large object size, but do not go over 100 MB: on the forums they say objects over 100 MB get segmented, and it is unknown what happens to segmented files there. I also found that the hubiC page mentions a limit of 50,000 files per folder. I told the S3QL developer about this, but he does not want to do anything hubiC-specific (I am using the Swift backend of his code, not hubiC directly, but anyway). So at this point someone needs to be daring enough to test more than 50k files… For now, you can create as many filesystems as you like.
After creating the filesystem, mount it. Take a look at the options, read the S3QL manual and use common sense; do not use the default settings. In my case, for example, I use a 1.7 GB cache, 5 upload threads and no compression. I have another system as well with light compression and a smaller cache/fewer threads.
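As a sketch only (the gateway URL, container path and mountpoint are placeholders, and the option names should be checked against the manual of your S3QL version), the create and mount steps look something like this:

# Backend credentials can go into ~/.s3ql/authinfo2 so you are not prompted each time.
# --max-obj-size is in KiB: 51200 KiB = 50 MB, safely under the 100 MB limit.
mkfs.s3ql --max-obj-size 51200 swift://hubic-gw.example.com/default/s3ql

# --cachesize is in KiB too: roughly a 1.7 GB cache, 5 upload threads, no compression,
# matching the settings described above.
mount.s3ql --cachesize 1700000 --threads 5 --compress none \
    swift://hubic-gw.example.com/default/s3ql /mnt/hubic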
At this point you should have it working. Test it out: uploads and downloads should be fast. Don't mess with it; let the system complete its requests while you are using it. Always FOLLOW THE LOG FILES while testing! mount.log, the HTTP server log and everything related; even syslog is good to keep open. Turn on debug modes if needed.
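For example, I keep something like this running in another terminal; mount.s3ql writes to ~/.s3ql/mount.log by default, and the web server log path depends on whether you use nginx or Apache:

# Watch the S3QL mount log, the gateway's web server log and syslog at the same time
tail -f ~/.s3ql/mount.log /var/log/nginx/error.log /var/log/syslog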
System resources and speed
S3QL is the biggest resource consumer: with compression enabled it will take 100% CPU for sure, but that is OK, it works. Be patient when uploading.
Running without compression is fine and takes less, but it still takes a lot; that is normal. And one more thing: the more filesystems you use at the same time, the more CPU it will eat, and a lot of disk speed is needed too. I prefer SSD; it might be better. Just remember to watch top while working: if you see 100% CPU usage, don't expect full upload/download speeds; get a faster machine in that case if you need more.
My bottlenecks are CPU power and disk speed; if those were faster I could upload even more files at the same time to different filesystems.
About speed: I am using a VPS with a 150 Mbit connection, capped at 150 Mbit/s both ways, and it runs at full speed.
Tips about where you can use this
I am using ownCloud and it works very nicely: you mount the filesystem as local storage, not as a Swift server. You need even more speed because caching is involved when using WebDAV. I am in contact with the ownCloud people and there are some issues with dropped connections etc., but they are all fixable. At home I actually use WebDAV and Samba at my full internet speed against an ownCloud instance that uses the hubiC space as external storage.
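If you want to share the mounted filesystem over Samba like I do, a minimal share could look like this; the share name, mountpoint and user are just examples:

cat >> /etc/samba/smb.conf <<'EOF'

[hubic]
    path = /mnt/hubic
    valid users = youruser
    read only = no
EOF
service smbd restart   # the service is called "samba" on older Debian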
Think about the possibilities. The price of hubiC is the lowest I was able to find in Europe at these transfer speeds, and the speed is the key to everything!
I am also using a VPS from OVH, the lowest 2014 Classic model at 2.50 EUR/month, and it works well, but today I upgraded one step up because I was having a lot of issues with low disk space; it just was not enough for me. The point is that this setup works even on the lowest VPS package.
If you decide to use ownCloud there is a lot to tweak: recompilation of software, timeouts, directory paths and other non-standard things…
One more tip: keep the cache folders big or you will be disappointed.
I can post more detailed information with example config files later if people are interested. Leave a comment if you like this.
Please don't complain about my English; you can leave this page at any time. This is a guide to help people who have been struggling to get use out of hubiC. Also, my hubiC referral link is full, I have already filled it up, so that is not the reason for making this page.
Happy Xmas everyone!
I was sort of dreaming about this and would be really interested in getting a more detailed explanation on how to do it!
Thanks for the news!
Most of the components needed are explained there. What more detail do you need? 🙂
Hi,
Did you get any issues (certificate errors) with mkfs.s3ql on Debian 8?
How did you solve it?
(I am using a self-signed cert.)
thx
Hi again,
Can you please copy/paste your mount command line with the optimal parameters?
thx
Nice topic. Can you give us more details on how to create the S3QL filesystem and how you mount it?
Please post an example command.
thx in advance
Use the S3QL website manual for mounting and creating.
A certificate error? That should not happen, but a warning will come if the certificate is self-signed.
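If mkfs.s3ql really does refuse the self-signed certificate, one workaround (assuming the S3QL 2.x backend options; check the manual for your version) is to point it at your own certificate file explicitly:

# Validate the gateway's certificate against the self-signed cert/CA file itself
mkfs.s3ql --backend-options ssl-ca-path=/etc/ssl/certs/hubic-gw.crt \
    swift://hubic-gw.example.com/default/s3ql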