Imagine a shared storage resource that needs no storage array hardware at all, has no central controller like a Virtual Storage Appliance (VSA), protects your data, and involves little or no system management. It's a product being developed by three graduates of Scotland's University of St Andrews, and if it takes off it might …
Every so often I've googled for something like this. I have a load of SMBs that would be way better off with a distributed FS, yet up to now they've all been geared towards massive scale and as such are unsuitable for a handful of workstations.
Reading the WP now, hopefully there's something in it about edit locking, which always seems to be the stumbling point with synced file shares.
Limited use cases
Not a bad idea in some cases - I like the school use case as they are genuinely somewhere with potentially big storage needs and a lot of PCs. What happens, though, when you need a file after 5, everyone has gone home and turned their PC off, and the one thing you need isn't cached? How do you get that file? Do you have to turn on the whole school's PCs?
Re: Limited use cases
Leverage Wake on LAN?
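For what it's worth, waking a sleeping machine over the LAN is just a single UDP broadcast carrying a "magic packet". A minimal sketch in Python (the MAC address, broadcast address and port here are illustrative placeholders):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP; port 9 (discard) is the
    customary choice, and port 7 also works with most NICs."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The catch, of course, is that WoL has to be enabled in each machine's BIOS/NIC settings, and something on the network still has to know which machine to wake.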
However, there will always be cases where data is lost, temporarily or permanently: a whole lab of PCs shut down while the electricians or decorators work, or a set of PCs being unavailable that exceeds the redundancy in whatever redundant storage technique is in use. So it can't ever be quite as reliable as proper server-room storage with a proper backup strategy.
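To put rough numbers on that redundancy point: assuming (optimistically) that each PC is online independently with probability p and each block is stored on k machines, availability is easy to compute. The sketch below is hypothetical, not how this product actually works:

```python
def block_available(p_up: float, replicas: int) -> float:
    """Chance that at least one replica of a block sits on a powered-on
    PC, assuming each machine is online independently with probability
    p_up. A block is lost (for now) only if every replica host is off."""
    return 1.0 - (1.0 - p_up) ** replicas

# e.g. machines on half the time, 3 copies of each block:
# block_available(0.5, 3) -> 0.875
```

The independence assumption is exactly what a whole lab being switched off at once violates: correlated outages take down all replicas together, so the real-world figure can be far worse than this formula suggests.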
Re: Limited use cases
"So it can't ever be quite as reliable as proper server-room storage with a proper backup strategy."
It isn't meant to be. It's for a small biz of six workstations in Windows Workgroup mode needing shared storage. Setting up a workgroup share on the "boss's" computer isn't resilient enough. Buying a server with enough data storage is still going to be a single point of failure in an office where they likely can't even properly manage their own workstation failures. If you're reading this website, chances are you already exceed the mental capacity needed to run this software.
Re: Limited use cases
If they backup to 'the cloud', that could be a tertiary target for retrieving and/or saving data that was changed. This method would allow for great availability, albeit at the cost of performance (...maybe).
What a pity this wasn't part of MS LAN Manager (1.0 on OS/2 (1987?), MS-OS/2 & UNIX in 1989, Windows for Workgroups in 1992 and NT 3.1 in 1993).
Interesting stuff. I wonder how it compares to alternatives like zfs and btrfs? I believe btrfs in particular doesn't version everything but you can tag the state of the entire filing system at a particular point in time.
If it works down at the logical volume level, there's no reason to restrict the filestore that runs above it. zfs or btrfs might be good matches (they're supposed to be self-healing, should the worst happen and data be irretrievably lost).
Isn't this what Google do (although on "server" equipment, rather than end-user computers) to keep their storage dynamic and flexible?
That said, I think this is a truly great idea, although I would need to know more detail before I would implement it in a real-life scenario. When the lowest-spec desktop seems to be 120GB these days, I'd say about 80% of that is wasted in an enterprise environment.
And for their next project, they need to look at shared CPU and RAM...
"All data is encrypted, which answers security needs. "
I'm sorry, but that's woefully short on info.
From whom is the data secure? Outsiders? (i.e., shared encryption key across all users) Non-file owners? (i.e., user specific encryption keys)
What about sharing files? Does a secure key exchange occur to allow the file to be re-encrypted for the other users? Where is the decryption/re-encryption done? If it's on the regular PCs, what's to stop malware sniffing the key? Is there a (secure) key server infrastructure, or does the business still need a dedicated secure key server?
Good to see...
...that Silicon Glen is still alive and well, and living on Fifth Avenue, New York City ;-)
Already exists, sort-of.
I briefly looked into Tahoe-LAFS but didn't get around to trying it. Free, but reading around made me feel it wasn't production-ready yet. Still too much of a toy. An interesting toy, certainly, but a toy. Beyond that there are a number of systems that do more or less the same, or that should provide various parts out of which you could make such a thing, some of them free. That site lists a few. But I think these people have correctly figured that there's a product here to be made to work, then polished. Whether there's a market for it too is more than I can say. Would be interesting to know whether this is original work, though.
This is something I've been moaning about for years ("why can't all those unused desktop gigs get put to good use??"). All it needed was some people a lot cleverer than me to work out how...
Not a bad idea
Intriguing concept, but rather complex and difficult to envision. Raises many questions, e.g.:
- How many companies have loads of PCs but can't afford a NAS? A 52-branch cake shop can afford a NAS. Smaller companies need to run a server anyway, so using it for storage is not an additional expense.
- The tricky issue of backups is dismissed with one word: "cloud". But the article also says potential customers don't like cloud for storage, so why would they like it for backups?
Re: Not a bad idea
The problem is that disk space is cheap
A small company has 10-20 PCs with 200GB spare on each, so 1-2TB of distributed capacity. So rather than go and buy a $100 2TB USB drive (and a spare so they have an offsite backup), they run this?
So the market is a company with 10,000 PCs sitting around, e.g. banks, who have data that isn't important enough to justify some proper sort of server.
Re: Not a bad idea
The point is that most small businesses already have enough disk space for their needs. Yes, you could buy a cheap NAS or server, but without on-site IT, when it goes down (and it will) no one will do any work. By utilising existing workstations they end up with a system that's x times more resilient than anything else they could afford, at a fraction of the cost.
That's the concept anyway, but I can't seem to find the whitepaper on the linked site so I've no idea if they've cracked file locking between nodes, without which this is basically just another dropbox for workgroups.
Windows, eh? Hope it continues working as everyone's PC gets rebooted or rooted. Sheepdog, anyone?
Interesting but flawed
I would be reluctant to put any important data on Windows machines as they are more prone to viruses and other malware than anything else.
I'm also surprised at the statements regarding no maintenance - there may be no 'single point of failure', but there could easily be hundreds of potential problem areas (hard drive, RAM, CPU on each host) that need to be monitored to make sure the system doesn't collapse.
Would you really want to run Google's system (on Windows) in your own office without any onsite IT?
This sounds like a disaster for the target market.
In practice, I just can't believe that this isn't going to be a nightmare to implement and maintain, regardless of what they say.
Particularly considering the target market - people who lack the budget for a $200 standalone NAS box ($300 for a dual-bay one with mirrored drives), yet have a network with dozens of underutilized machines. The administrators are probably marginally competent at these shops (if there even is an admin). Think about how they're lacking a really basic feature of any business network (shared storage), yet have enough machines to pull it off with unused space. What sort of site is this? What the hell are they using the computers for?
It's either a school (where the life of the systems is considerably lower, since students abuse them and the administrators aren't even marginally competent); a business that has almost no connection to technology, using computers as POS terminals (like the bakery shop example) and just realizing they need to pay more attention (but not money, not even a couple hundred for an easy-to-configure NAS box) to IT; or a company that's dropped the ball on IT bigtime.
And now we expect the underqualified admin to schlep around to every machine (because they probably don't have remote access set up - as I just went over, their network is a regular charlie foxtrot) and set up this distributed storage mess?
Ain't gonna fly. Might work (maybe) on a well-run network, but those all have the money for a real storage system.
This has already been done - maidsafe.net.
The article is, at best, an overview but if the article is taken at face value the claims seem a little disingenuous. Most PCs connected to a network have their own disk and use the network to retrieve relatively small amounts of *extra* stuff.
This app seems to propose that all disk activity is over the network. What does that do for bandwidth management? Retrieving content from local disks is usually faster than accessing the same content over the network, so unless the SMB decided to invest in massively redundant network capacity, surely the network is going to become a huge bottleneck very quickly. If so, then there is a cost: that of replacing the network. So the trade-off is between upgrading the hubs, routers and wiring, or adding a $30 disk drive (maybe even two) to a PC, or a slightly more expensive disk (or two) to a server?
"They plan to commercialise software coming from research and development."
So what are the costs for this solution? Unless they give it away free I would guess that the costs outweigh the benefits, even for the 6 workstation office.
Seen this two decades ago !
I remember we had software doing much the same on a network of Macs over 2 decades ago - pre-history for the youngsters. Can't for the life of me remember what on earth it was called after this long.
These were the days when having a hard disk at all was "quite new" and expensive, and most machines came with something like 20 or 40MB (yes kids, megabytes!) or less.
All this adds is replication (redundancy), de-dupe, etc. The software we were using kept just one copy of each file - but with the supplied utilities you could specify which machine each folder/file was kept on. If that machine went offline, the files on it were still "present" but not accessible. This meant you could take a machine (not laptop) home and the shared filesystem would still be there - but only the files on that machine (the ones you need, if you've set their location beforehand) were accessible.
And here's the rest of the world thinking the pressure point was IOPS and latency...
A management and security nightmare as well (the magic word "encryption" doesn't cut it).
And I am pretty sure I have seen similar distributed storage models (albeit within the datacentre) before.
Still, reinvent the wheel with a slightly wider rim and bask in the patent income. They should market it as a "Local Area Storage Cloud" if they are looking for angel/idiot investors.