I don't use my pebble for much in the way of internet connected thingies anyway, and as I use SailfishOS I don't need to worry about the app disappearing - there's a native app.
However, there's always: https://blog.jolla.com/watch/
It reads like one.
I get portable data volumes. However, whatever happened to the concept of stateless containers? Wasn't that the dream?
No, don't tell me. You've got a database in a container and you want it to fail-over. My suggestion is to stop looking at containers as a solution to everything.
I think flockerhub is solving a problem that, if people architected properly, oughtn't exist in the first place.
The package system is quite good, and there's community stuff similar to FreeBSD ports or Gentoo portage.
I've used it on ARM devices (cubox-i etc.) and it uses systemd. I'm not sure if that's mandatory, but if it is then I feel sorry for anyone using Arch to learn Linux and then having to use systemd.
To avoid that horror, flee to Slackware!
But the reality is that DevOps has been confused with CI/CD by almost all agencies/employers now.
I'm currently looking for a contract - my expertise is Linux infrastructure/operations. Those contracts don't exist any more. Go to a job site of your choice now and search using the keyword 'Linux'. A year or more ago that would have given you mixed results; Linux support, Linux Admin, Operations, Engineers etc.
These days over 90% of the results are 'DevOps Engineer'.
Look at the spec for a DevOps engineer and it'll vary, but it's typically Puppet/Chef/Ansible, Python/Ruby, AWS, Jenkins/Travis etc. The role is generally "Manage the CI/CD toolset, package up the software for deployment, write the deployment scripts, do the deployments."
So DevOps is actually "that bit where dev and ops overlap which no-one else wants to do."
The issue with this is that it doesn't solve the problems that the real DevOps aims to solve. Instead of separate dev and ops silos, you now have dev, ops and devops silos.
The "DevOps" roles also put a lot of emphasis on the candidate being able to code (and that's code, not script), so evidently this favours developers that, for reasons either good or bad, have moved from their development career into DevOps. They'll tend not to have (and the employers aren't looking for) deep or broad experience in ops, and therefore won't be aware of the niceties of how to properly apply the stuff they're doing in dev environments to production.
Oh, and for extra hoots, in the past week I've also come across adverts for TechOps, WebOps and NetOps. The future's looking bright!
There's no reason that a container can't have direct hardware access. I think you're mistaking containerisation for virtualisation. It's misleading to call containers a form of virtualisation - they're not running on emulated hardware but rather directly on the host hardware, which is why they can access it.
It's actually best described as a way of bundling software and its dependencies and running them so that they're isolated from other containers; more like a super-fancy chroot.
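To illustrate the "super-fancy chroot" point, here's a minimal sketch using `unshare` from util-linux - the same kernel namespaces that container runtimes build on. No emulated hardware involved, just an isolated view of the host (it needs elevated privileges on most systems, so it falls back gracefully):

```shell
# A process in its own PID and mount namespaces sees itself as PID 1,
# just like the first process in a container does.
if unshare --fork --pid --mount-proc sh -c 'echo "my pid: $$"' 2>/dev/null; then
    echo "namespace demo ran"
else
    echo "unshare needs more privileges on this host"
fi
```

Either way, it's the host kernel running the process directly - there's no hypervisor anywhere in that picture.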
And of course you wouldn't put your actual data within a container anyway - there's stuff you can do with data volume containers, but the real question is, what's the point? There's no real benefit to it.
I think the point you made about putting GUI components etc. into containers is the only sensible use of containers when it comes to storage, but just because you've put something in a container it doesn't mean you're committed to updating the container every few days. What makes you think that this is forced on containers any more than it's forced on 'traditional' applications?
So you're suggesting that host servers are monitored? Good idea - I don't think sysadmins ever considered monitoring servers before. That might be a new market!
Docker containers can leave a load of cruft behind; stopped containers that haven't been removed, volumes from removed containers that weren't removed along with the containers and of course container images that aren't being used by any running containers any more.
I think it's worth suggesting that, in the same way that containers should be ephemeral, the cloudy hosts running them ought to be too, so avoid the accumulation of this slough.
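For what it's worth, newer Docker releases (1.13 onwards) grew `prune` subcommands that do exactly this housekeeping. A sketch of a cron-able cleanup script, guarded so it's a no-op on hosts without Docker:

```shell
#!/bin/sh
# Clear out Docker cruft: stopped containers, orphaned volumes and
# images no longer referenced by any container. -f skips the prompt.
if command -v docker >/dev/null 2>&1; then
    docker container prune -f
    docker volume prune -f
    docker image prune -a -f
else
    echo "docker not installed; nothing to prune"
fi
```

On older Docker versions you'd get much the same effect with `docker rm $(docker ps -aq -f status=exited)` and friends, just with more rope to hang yourself with.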
If you're on bare metal then your ops people should be putting the normal housekeeping monitors and scripts on the server that they'd run on a 'normal' server.
If your 'devops' doesn't grasp the concept of general server husbandry then I'd rather not buy shares in your slack startup.
I agree with the thrust of the article, but the reality is of course that there aren't as many system 'architect' jobs going as sysadmins or helpdesk. You'll still need some form of IT support; to imagine that all support and administration tasks will be automated is pretty naive.
I don't think it's fair to readily dismiss the people who are happy to work in a support role for their whole career. Why should they aspire to an 'architect' role if they don't want to?
The other issue is that compared to ten years ago the amount of new technologies is astounding. It's hard, no, impossible to keep up to date with them all. So yeah, an architect should have a good idea of what's out there but it's unavoidable that a lot of it will be shallow knowledge; vague impressions even.
So the idea that an architect will come in, look at the issues and splurge out the ideal solution immediately from their vast store of knowledge is a nonsense. Anyone worth the money will spend a while analysing the issues and then a good deal of time researching potential solutions and offer up different options to their employer or client.
This is a nonsense. If DevOps is supposed to mean *anything* it's supposed to mean a form of collaboration between dev and ops which allows everyone to do their job as painlessly as possible, be happy and fulfilled in life and churn out crappy software updates on a weekly/daily/hourly basis in order to fix the last crappy software update you did.
You don't have such a thing as a DevOps 'role'. You still work either in dev or ops. It's supposed to be about getting the procedures and toolkits right.
So telling someone that they need to learn how to be a developer and a sysadmin at the same time suggests that whichever paid-per-word pseudo tech journalist drone wrote this "article" knows less about DevOps than the average CTO.
I wouldn't trust many developers I know to write a B+ tree library in C. But then this shows a flawed mindset; why not use one of the pre-existing B+ tree libraries?
There are different levels of coding with developers too, you see. I know sysadmins that code better than a good proportion of developers.
But can you use it to play Linux games on Windows?
Well, as for improving Linux - systemd *attempts* to improve Linux desktop/mobile installations. None of the systemd "improvements" make any sense in a server environment. Faster boot times? Fsck a thumb drive?
Have you seen systemd's cron replacement? Yuck! Binary logging is in no way an improvement over text logs. The FreeBSD init system is a much better fit for server systems.
I've got no real issues with systemd on a desktop system aside from the fact that it becomes a dependency of other stuff. How on earth that came about I have no idea. :(
The gesture based stuff has gone a bit downhill, yes. The really annoying thing is the big focus on Android integration rather than getting the native stuff up to scratch.
If I wanted to use Android apps I could, you know, buy an Android phone.
They seem to have spent a lot of development effort on that rather than fixing some long, long-running bugs. Like crappy network management, or buggy IMAP IDLE.
Try ElementaryOS on an old Acer netbook. :) Or if you want to stick to Ubuntu, switch your desktop manager to something like Enlightenment. I think we should all be well past the time when you mistake "Linux" (i.e. the kernel, which supports loads of older hardware) for "Ubuntu with Unity".
I imagine these days you might even be able to get a Linux distro with a GUI running on something as underpowered as a RaspberryPi!
Now I'm sure there's a mobile "version of Linux" knocking about some shops somewhere. Now I think of it, I recall there was an amazingly successful crowdfundy thing to produce a tablet for it to run on. Now what's it called again? HoverfishOS? SailpigOS? No, it's gone. If only I was a technology journalist writing about Linux on mobile devices then I'd be bound to have heard of it.
And then there was that other one, netOS? webOP? You know, the one that used to be in phones and tablets in the shops, and is now on TVs and Audi watches.
And that new one just released in India that'll be on phones, tablets and TVs soon. Tizer? Tiger?
I recently bought a Nexus 4 in order to try out other Linuxy-based OSes. Ubuntu easily trounces FirefoxOS, but I found the UI to be a bit of a mess - a half-hearted implementation of gestures, a confusing home screen system with "scopes" that offered difficult-to-get-to home screen personalisations.
No mobile OS around at the moment beats the user experience of webOS. Shame no-one's reviving that for phones and tablets.
I think you've missed the point of Docker - it's not meant to run simultaneous operating systems on the same server. It's not virtualisation - it's an encapsulation and deployment solution. You're not "giving up" high-availability - that's provided by the design of your production infrastructure and software architecture.
Your article is essentially pointless.
It is a shame it doesn't, but I think it's much too difficult to get open-source drivers for ARM SoCs, which is why they've gone for Intel.
Well Maemo/Meego and WebOS had multi-tasking first.
That's why the tablet is Intel rather than ARM - I believe the intent is that all of the hardware will be supported by open-source drivers.
They're gradually open-sourcing other parts of the OS too, but I think there are still a few apps that are closed source. What I'd like to see is more options to use alternative apps as default and for them to speed up the process of getting apps into their app store.
You're normally able to change your nameservers via whichever company you registered your domain with. The bad news is that you probably bought your domains from fasthosts, so until their servers are back up you're not going to be able to change those settings.
You'll also need DNS hosting to switch over to. ZoneEdit.org used to be OK, but I think someone bought them out recently.
The screen isn't matte, but it's not overly reflective. The polarising layers (for the 3D) seem to mitigate reflections.
There is a 32" model - I bought it last week. The journalist here might not have done their research properly. :)
Look for the 32LB650V @ ~£440. It doesn't come with the magic remote - that's an extra £25 from Amazon.
I had an HP Pre3 and I loved webOS on that. It's not quite as good as a smart TV platform, but it's still quite good. The 32" model has a dual-core ARM processor (I don't know the type/speed). It provides reasonable performance, but perhaps not as smooth as higher-end models. The TV still suffers from a ~30s wait from being turned on before you can fire up the smart TV features.
In general I think LG webOS is a pretty good first release, but I'd say some of the apps could certainly be refined (especially the DLNA playback app). I just hope that LG release webOS updates for all TVs on an ongoing basis rather than only releasing new versions of the OS on newer model TVs.
It's very difficult to say. It all depends on what you're on, what you're moving to, what you're migrating and how you're migrating. :)
How big are the inboxes?
Is the new mail system on the same network? What bandwidth is available? Is it feasible to use disk-shipping if it's a remote system?
What storage backend does each system use?
If they're both maildir then the task is quite simple - copy all the files to the new system and perhaps script a few tweaks to some of the files.
If you use something like IMAPSync then it's going to take quite a bit longer. A large mailbox over the internet might take hours.
Is it just mail too, or are you going to try and migrate contacts and even calendars? Unless both systems use some kind of standard (CalDAV etc) then it's going to be a manual process - export, convert, import. Even with CalDAV etc. it's going to be fiddly.
So without having anything to go on, I'd say it'll take a few weeks, if not more.
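As a toy illustration of the maildir case (the layout below is invented - adjust for your own paths): the mail store is just files, so a recursive copy does most of the work, and between real servers you'd use rsync over SSH so a final pass after stopping delivery only transfers the delta:

```shell
# Build a miniature maildir and copy it the way you'd copy a real one.
# Between servers you'd point rsync at user@oldhost:/var/vmail/ instead:
#   rsync -aH oldhost:/var/vmail/ /var/vmail/
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/alice/Maildir/cur" "$src/alice/Maildir/new" "$src/alice/Maildir/tmp"
printf 'Subject: hello\n\nbody\n' > "$src/alice/Maildir/new/1341100800.M1.oldhost"
cp -a "$src/." "$dst/"          # archive copy preserves times and perms
ls "$dst/alice/Maildir/new/"    # the message arrived intact
```

The "perhaps script a few tweaks" part is usually ownership fixes and subscription/UID files, which vary by IMAP server.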
Seafile. Better than OwnCloud in that the synchronisation works and doesn't delete your documents.
Well you've completely missed not only the point of the phone, but evidently also lessons in spelling and grammar.
Jolla have themselves stated that they're not in it to compete with Android/Apple. They know they won't become a dominant player. Their goal is simply to sell enough handsets to be a profitable niche market. This phone attracts people who want more choice than simply Android or iOS. You're obviously happy with those two options, so you know what; why don't you not buy one and pretend that the Jolla phone doesn't exist rather than embark on a poorly-written rant about how you, as the technology expert and prophet that you undoubtedly are, don't think it will succeed? Then perhaps revisit the days when you said that Android had no chance and that the market's dominated by Blackberry and Symbian.
They're using the fact that they don't collaborate with the NSA as one of the selling points.
Hmm. Well Ubuntu desktop doesn't come with SSH installed by default. Certainly not LTS anyway.
And as someone else above pointed out, Linux *is* the kernel. You can't bang on about Linux being inherently insecure etc. when talking about an SSH server originally developed for another UNIX variant and which is available for almost every operating system you can name.
On top of that, not every distribution that includes SSH by default will necessarily use OpenSSH (which I assume is the server that was affected).
This was on Tomorrow's World quite a few years back. Nice to see the relentless progress of science continues.
Forging an email is a doddle, and TLS with authentication has nothing to do with making it harder. You can run an MTA on your own box if you honestly can't find some other MTA to send mail through. Even in your mail client you can set your name and email address to be different from your username/password when you do use authentication.
Of course, the recipient might have half decent filters on their mail servers that will check HELO addresses and reverse DNS and all that, but it possibly still won't bounce the message even if it looks suspicious because of all the false positives from badly configured MTAs out there.
The recipient could manually check the headers for a suspicious email, but I doubt there are many people that do that. So a lot of people won't notice that an email's forged unless (like most phishing attempts) the content of the email is obviously not genuine.
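As a sketch of what "checking the headers" actually means (the message below is invented for demonstration): the From: line is whatever the sender typed, while the Received: chain is stamped on by each MTA that handled the message, so a mismatch there is the giveaway:

```shell
# Fake phishing message for demonstration purposes only.
msg=$(mktemp)
cat > "$msg" <<'EOF'
Received: from dodgy.example.net (HELO mail.bank.example) (203.0.113.7)
From: "Your Bank" <security@bank.example>
Subject: Please verify your account
EOF
# The HELO claims bank.example but the connecting host doesn't match -
# exactly the sort of thing HELO/reverse-DNS filters score against.
grep -i '^Received:' "$msg"
grep -i '^From:' "$msg"
```

A spam filter does this comparison automatically; the point above is that almost no human recipient ever will.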
Linux containers are a great way to run multiple Linux servers; you avoid the emulation layers that VMs require so the end result is that your container runs faster than a VM on the equivalent hardware. You can do a number of interesting things with the way you set up containers using features of the Linux kernel to allow for really easy cloning of containers and other fun things - e.g. containers on LVM2 or BTRFS.
I've come across Docker before and I can't quite see what it's offering that doesn't already exist when you use LXC intelligently. As far as I can work out, it's just allowing you to create the equivalent of virtual appliances that you can get from places like the VM store. That's fine, but that's just packaging tools and templating - they're not developing LXC itself. From the article it sounds like they're almost claiming that Docker's the only thing that makes containers useful, and that's a bit cheeky.
Incidentally, for those interested in playing with containers, I'd recommend trying them out on Ubuntu, as they've put a lot of effort into making it easy to create and manage containers (especially with using Ubuntu guest containers).
A final thought, I'm not sure why the article was banging on about bringing the containers down to upgrade the kernel. I can't think of that many instances where software development depends on a specific kernel version in the first place, but if that is important then KSplice addresses that particular issue - kernel upgrades without a reboot.
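For anyone wanting to try the Ubuntu route, the workflow is roughly this (the container name `web1` is made up, and the block is guarded so it does nothing where the LXC userspace tools aren't installed):

```shell
#!/bin/sh
# Typical LXC workflow on Ubuntu: create from a template, start it
# in the background, then list containers and their state.
if command -v lxc-create >/dev/null 2>&1; then
    lxc-create -t ubuntu -n web1
    lxc-start -n web1 -d
    lxc-ls --fancy
else
    echo "lxc userspace tools not installed"
fi
```

The template download is cached, so cloning a second container after the first is close to instant - and faster still on an LVM2 or BTRFS backing store as mentioned above.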
I still use my Pre3. You're right about the hardware - it was outdated even for the time, but I don't have any issues with performance really. The bootup time is ages, but then I rarely need to reboot.
It was designed as a business phone, so the multimedia aspects are iffy. Sound's not great and there's no expandable storage.
The touchstone charging's great though - I've got one at work and one at home and it beats plugging it into cables to charge hands down.
Obviously the selling point is the OS. WebOS gestures and multi-tasking are so effortless and intuitive that when I try using other mobile OSes, they just feel jarring and ugly. WebOS could still be fantastic if they could just update it - new browsers etc. It's a shame that HP killed it dead.
I have a couple of the Philips LivingColors lamps - they connect together so they have the same hue and they're designed to uplight - casting their light on walls. They're great; being able to control the lighting in your environment really is a great way of relaxing in the evening (or perking yourself up). It's highly effective psychologically.
However I wouldn't buy these lightbulbs, certainly not to use in ceiling fittings because downlighting is too direct and not as relaxing. I know you could put them in lamps, but that's still not as effective as proper uplights.
Add to that the inability to control them without a smartphone (or just switch them off and lose their only selling point) and these become expensive gimmicks with no real benefit.
Philips should perhaps create a hub that controls the existing LivingColors range. That seems like a more practical solution to whatever problem this product is trying to solve.
"They'll be inventing hardback and paperback versions of ebooks next."
They have. :( I went to buy an ebook once. It cost £20 because it was 'the hardback version'. Needless to say I didn't buy it.
Actually, it really is an interesting place. Worth a visit if you're on holiday in the south of Spain (don't forget your passport), but avoid the town centre as much as possible. Stick to the nature park which covers most of the side of the rock.
Yes. IMAP is an internet protocol. It's a very efficient protocol. With IMAP IDLE, supported by several IMAP servers and quite a few clients, you get 'push' email. For example, my phone connects to my Dovecot IMAP server and I get email notifications the moment they hit my inbox. CalDAV etc. are also widely supported on mobile devices. iPhones can do CalDAV. CardDAV is for contacts; it's less widely used but I think the iPhone does them.
So you *can* do essentially the same thing using open protocols - I know, because I do.
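For anyone who hasn't seen it, here's roughly what the IDLE exchange looks like on the wire (host and credentials are invented; you can replay it by hand with `openssl s_client -connect imap.example.org:993 -quiet`):

```shell
# Client side of an IMAP session entering IDLE (RFC 2177).
printf '%s\r\n' \
  'a1 LOGIN alice secret' \
  'a2 SELECT INBOX' \
  'a3 IDLE'
# The server answers "+ idling" and the connection then just sits open.
# When a message arrives the server pushes, unprompted:
#   * 23 EXISTS
# ...which is your push notification. The client sends DONE to finish.
```

No polling, no proprietary push service - just a long-lived TCP connection to your own mail server.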
Sounds similar to SAM Coupe BASIC, which was based on Speccy BASIC.
SAM Coupe BASIC was pretty damn good.
The whole point of the Vita is that it's a portable console with proper controls. You can't compare playing Temple Run or Angry Birds to playing Assassin's Creed or LittleBigPlanet on the Vita. Trying to control a game using virtual controls is awful, and I'm sure you can get joypad peripherals for your phone, but they're not widely supported, nor are they ever likely to be.
Hey. Why are you blaming the systems operators? The blame here goes to the developers and management for not hiring whitehat pen testers. Systems staff rarely get a say in which software they run on their systems.
Well of course you should use the correct tool for the job. I use a GUI on my desktop, and you know what; I often use a GUI file manager to copy and browse files!
However, I find that I can't do that so easily on remote, production servers on account of not running a GUI on them and in fact not installing the libraries I'd need to use to run a GUI on them even if I wanted to. Crazy! So when you're talking about being a *LINUX SYSADMIN*, you perhaps shouldn't be surprised that people flame you for telling people to use a GUI.
Your articles, despite the titles, simply aren't aimed at professional sysadmins and the advice you give for novice/trainee Linux sysadmins won't be applicable for a large number of Linux installations.
Are you joking?
Look up some basic network design and then get back to me when you understand it. You are, like the author of the article, assuming that you have one server and that it's directly on the internet. I mentioned things like high availability before, which is one aspect of good network design - the sort of thing you do if you're a professional. How the hell do you get high availability with a single box, whether or not it's got iptables on it?
SELinux isn't new technology; it's at least a decade old. It is badly designed and poorly documented technology though.
The point I was making, however, is that beginner Linux admins ought to turn off SELinux because they'll try and do something simple and it won't work because of SELinux. There are other things they could do which aren't mentioned in this article which will make their systems more secure anyway and won't be such a pain-in-the-arse.
For one, CentOS has a stupid amount of services running by default, most of them ridiculous. If I remember correctly, one is a bluetooth service or something mad like that. The first step to securing a box is to stop unnecessary services. Another is not to run Webmin which, if I remember, has had some pretty nasty vulnerabilities in the past.
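On a CentOS box of that vintage the tooling is chkconfig/service. A sketch - the service names below (`bluetooth`, `cups`, `avahi-daemon`) are the usual suspects, but check your own `chkconfig --list` output rather than taking my word for it. Guarded so it does nothing on systems without chkconfig:

```shell
#!/bin/sh
# List what starts in runlevel 3, then switch off the obvious junk
# (needs root to actually take effect).
if command -v chkconfig >/dev/null 2>&1; then
    chkconfig --list | grep '3:on'
    for svc in bluetooth cups avahi-daemon; do
        chkconfig "$svc" off 2>/dev/null || true
        service "$svc" stop 2>/dev/null || true
    done
else
    echo "chkconfig not present (not a sysv RHEL/CentOS box)"
fi
```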
There are indeed thousands of Linux boxes directly on the net. In fact my personal server is. But when I say serious, I mean serious as in "Let's hire a sysadmin to look after this" type serious. I would expect at least a screened subnet type network setup for running a serious network system, whether Linux or any other OS. Not only does it aid security, but allows you to move from a server that's a single point of failure to something more highly available.
The conclusion is of course that this article is aimed at hobbyists rather than people employed as a sysadmin, therefore SELinux would be a hindrance.
You do make some good points, but you should point out that this advice is only of practical benefit if you're placing a Linux box directly on the internet without being behind a firewall. I doubt you'll find any serious Linux setup that isn't behind a dedicated firewall.
Tools like ClamAV are designed to scan files going through the Linux system that will end up on other systems - Windows and Macs etc. There are very few viruses and trojans for Linux. If you've updated your system in the past year then you're probably safe against the ones that do exist. However you should really mention tools like Chkrootkit which will actually check for this stuff, or Aide which works as an intrusion detection system.
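A sketch of how those two slot into a routine (both guarded so they only run where installed; AIDE's database paths vary by distro, so treat the workflow rather than the paths as the point):

```shell
#!/bin/sh
# Rootkit scan: in quiet mode chkrootkit only prints suspicious findings.
if command -v chkrootkit >/dev/null 2>&1; then
    chkrootkit -q || echo "chkrootkit flagged something (or needs root)"
else
    echo "chkrootkit not installed"
fi
# Intrusion detection: initialise AIDE's baseline once, then run
# periodic checks that report files changed since the baseline.
if command -v aide >/dev/null 2>&1; then
    aide --init  || true
    aide --check || echo "aide reported changes (or no baseline yet)"
else
    echo "aide not installed"
fi
```

Both are cron candidates; the AIDE check in particular is only useful if something actually reads the reports.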
Incidentally, as a Senior Linux sysadmin with over a decade of commercial experience, I would advise turning SELinux off on your CentOS boxes. It really is more trouble than it's worth. However, Apparmor on Debian/Ubuntu boxes isn't too shabby, so keep that one running.
KSplice is a great product; it allows us to meet PCI (payment card) compliance without rebooting our live servers every time a kernel update is uploaded to the repo. Shame Oracle had to go and buy it though.
Of course, you don't need to install Oracle Linux to use KSplice - it runs on CentOS already, along with Debian, Ubuntu etc.
It's very difficult to try and run a commercial Linux infrastructure these days without running something or other from Oracle (e.g. MySQL, Java etc.), but I can guarantee that if I ever fancied commercial Linux support, Oracle are the last people I'd go to.
RHEL and CentOS are primarily server-oriented distros. Ubuntu is primarily a desktop distro and indeed the majority of Linux desktop installations use Ubuntu.
So that's why they're releasing for Ubuntu first, though I'm sure that someone that uses an "enterprise" distro as their desktop will have no trouble getting an Ubuntu package to run on CentOS.
There's no HDMI, nor indeed USB MHL. If this had either of those then I would have bought it despite the lack of SD socket. I don't get why you'd have such a powerful media device with little storage and no TV-out, especially if you use this Googly Play film rental thing. Surely people would want to play their films on the TV? *sigh*
I ordered a Hyundai A7HD instead - not nearly as fancy specs-wise, but it has HDMI out and an SD slot and a reasonable screen for a few quid less.
MySQL was also hit by this bug. Specifically MySQL on Debian based servers (and Slackware).
Here's a fix (which would have been handy to know yesterday) - re-setting the system clock to "now" clears the kernel's leap second state:

date `date +"%m%d%H%M%C%y.%S"`

This solves the problem.
People seem to have adopted runaway climate change as a religion - that is they get very worked up about something that they don't understand and have no proof for. Try suggesting to the more vociferous types that, no matter how hard we try, human impact on the Earth's climate is never really going to amount to much. The least that will happen is that they will scoff at you for being ignorant.
It's annoyed me so many times that all this climate research never seems to have taken into account the distant past - not just hundreds of thousands of years ago, but millions of years ago when the planet did indeed have a fair amount of CO2. The climate has shifted dramatically, but it still seemed to bear life the last time I looked.
I look forward to the believers in runaway human climate change responding to this study with derision. As they surely will.
I don't like Unity. I gave Gnome 3 a go, but even with mods to make it more like Gnome 2 I couldn't bring myself to enjoy it; mostly because of the lack of wobbly windows ;)
As much as I didn't like these two new offerings, I didn't start having a tantrum about how unusable Linux is these days because, and here is a point which people keep overlooking and which I think this article is helping to address, if you don't like it, don't use it.
There are other desktop environments for Linux, and switching between them doesn't mean (as this article suggests) that you need to reinstall your current distro.
A visit to Synaptic or Software centre or a one line command with yum will download a new desktop environment for you and you can log out of the current one and then log into your new environment.
I think that this is what Shuttleworth meant when he talked about 'different ways to skin Ubuntu', though I think he's not being proactive enough in telling people how to avoid Unity if they dislike it so much.
Do you mean, after the revelation? Do you really need to work so hard to unnecessarily hack bits out of a language when there's no reasonable need to? Do you feel like you've saved yourself some time not typing those couple of extra characters? Perhaps time enough to take a relaxing bath or write a novel? In text speak.
It was stated on the HP website that they expected a significant price reduction. I phoned to pre-order one at the beginning of last week (because the site was down) and the chap on the other end said they would be reduced and bundled with accessories. I've not been able to get back in contact with them by any means since, and I've not had any communication from them at all despite their website claiming that they've been contacting people.
I seem to be one of the few people who wanted one of those phones even before the price cut, but sadly I've been beaten to it by hundreds of people who intend to resell them on the ebays for twice the price. *sigh*
Linux the second most popular server OS? Not according to any of the stats I've seen in recent years.
What a lazy, pointless article.