"the system has a more modern and 'flatter' look"
Ubuntu 18.10 is about to make its scheduled appearance: Cosmic Cuttlefish will take centre stage from previous incumbent, Bionic Beaver. Looking back, version 18.04 of Ubuntu Linux was a shock to some and featured a desktop not to everyone’s taste, being the first Long Term Support (LTS) release of the distribution without the …
That caught my eye too, but the first question I have is: with KDE* available, why does anyone use Gnome?
I mean it's the default desktop in so many popular distros that there must be a good reason, but I can't for the life of me work out what it is.
* Or Budgie, or Cinnamon, or XFCE etc etc...
I came to the comments to weigh in on this same topic. I don't get the drift to flat interfaces. 5-year-old graphics chips are perfectly capable of rendering 3-D graphics. I prefer icons with some depth; the appearance is visually pleasing. So who decided that we all need flat interfaces? Is this change simply for change's sake?
> Is this change simply for change's sake?
You've answered your own question. About a great many things.
It was a sad day when some Linux developers decided they were "artists" instead of "engineers". The gnome lot are probably the worst, though systemd takes the special mention prize for zealotry.
" I don't get the drift to flat interfaces"
It's like MS Windows with a chest wrap. Or a mastectomy. Whichever.
Some millennial "children" out there [and their enabling oldsters] *FELT* (not 'think' but 'feel') that THEY should impose a Windows 2.x interface upon THE REST OF US, with their "It's *OUR* turn now" twist upon it, because they *FELT* (not 'think' but 'feel') that it was somehow "better" because in the early noughties, this is all that SMART PHONE INTERFACES could manage. So, in the realm of "all devices are phones" and "all OSes must ACT like phones", they cram this 2D FLATSO FLUGLY eldritch abomination (read: zombie resurrection of the Windows 2.x interface) entirely up the rest of our backsides JUST to make themselves *FEEL* better about being important or something...
The UI, on any operating system, is something people are reluctant to change. We still have, with only slight differences, elements that have been around since Windows ran on top of MS-DOS. Look at how minimise/maximise/close has and hasn't changed.
I have seen ideas for UI changes which might be improvements, but the struggle to overcome all those decades of habit made them more like failures.
@Dave Bell : Look at how minimise/maximise/close has and hasn't changed.
hum ... dunno ... I removed the minimize button, as I use "present windows" instead, and replaced it with a "keep above" button, which lets me keep a window visible on top even while I'm typing in another one. Or take "roll-up" and "roll-down" when scrolling the mouse wheel over the title bar, like MacOS 9 did in its time.
Windows and MacOS don't offer these very useful UI features; only KDE (Kwin) does.
"I have seen ideas for UI changes which might be improvements, but the struggle to overcome all those decades of habit made them more like failures"
For users, any UI change has to provide a benefit that exceeds the cost of changing work habits (and that cost is rather high). If a UI change provides a benefit that doesn't reach that threshold, then it doesn't just seem like a failure, it is a failure.
For me, I don't consider KDE an option, ever, because it wastes so many resources on eye candy, bogging down my system. In a VM, KDE is unusable too because it is so bloated.
Then again, I don't use Gnome either. The first thing I do is install xmonad, a non-decorated tiling window manager. I come to my computer to do stuff, not to get dazzled by eye candy or to get RSI from endlessly arranging windows with a mouse.
Multi-monitor support also sucks for both kde and gnome.
@Hstubbe : "Multi-monitor support also sucks for both kde and gnome."
I don't know about gnome (haven't used it for 10 years) but Kwin (KDE Plasma 5.12) is clearly one of the best window/display managers. I have multiple screens, at work, at home, in conferences, HDMI, DisplayPort, DVI, and it all just works. I never need to reboot, uptimes are in weeks.
What needs do you have that Kwin doesn't meet?
When it comes to Multi-monitors, workspace per screen is the only way to go IMO.
Few desktops have window managers that do it.
Gnome's default of workspaces on only one screen isn't too bad - a little limited, but you can leave an app on another screen undisturbed while you flip workspaces on the main screen.
Openbox has a patched version that does workspace-per-screen (works OK with LXQt; not tried with KDE/Openbox).
Enlightenment does it
But yes, the most functional are mostly the Tiling Window Managers.
When it comes to Multi-monitors, workspace per screen is the only way to go IMO.
Definitely! I've always run that way. When XFCE dropped support for multiple screens a few years ago, I ended up switching to running multiple instances of openbox+tint2. It works great 99% of the time, but there is still a bug where openbox occasionally locks up the X11 session. That's annoying, but not bad enough to make me give up having multiple screens.
Does a flat file server handle flat file databases?
But seriously, uptimes of months are easy and common in every world but the one with Redmond in it. There are several machines around here that only get rebooted when kernel security updates demand it. They always come back up gracefully.
With that said, anybody who maintains personal uptimes just for the sake of bragging probably deserves what they get. If your system's security/performance/whatever would benefit from a reboot, then reboot the fucking thing already! That particular DSW was over a couple decades ago, and BSD clearly won by a nose, with a properly setup Linux machine coming in a close second, followed by Apple trailing by a couple lengths. Redmond, sadly, DNFed and needed to be put down ... but for some reason was spared by the fanbois. Perhaps it'll be put out to pasture soon, it certainly won't be offered up for stud ...
 Corporate uptimes are a whole 'nuther kettle of worms.
With that said, anybody who maintains personal uptimes just for the sake of bragging probably deserves what they get.
After 18 years of not using Windows, occasionally noticing my uptime is more than a week or two still gives me a little thrill.
I still find it a pain having to reboot every time a kernel or graphics driver update (get your shit together, Nvidia) forces me to. systemd promised quicker reboots, but I'm not seeing them: every time, some errant process takes nearly 2 minutes to close, and UEFI insists on polling every single external drive looking for something to boot, despite boot order settings.
I still find it a pain having to reboot every time a kernel or graphics driver update (get your shit together, Nvidia) forces me to.
Why do you have to reboot for an Nvidia driver update? I've been running nvidia cards for decades, and don't remember ever having to reboot for driver updates.
I do remind myself every couple of weeks (when I've got some spare time) to reboot my desktop boxes just to make sure they are still bootable. If you wait six months, then inevitably something will force a reboot right in the middle of some urgent work -- and only then will you find that some update or other during the past months required a configuration adjustment that you forgot about. Now you've got to figure out what went wrong while people in manufacturing are twiddling their thumbs waiting for you.
This is a virtualization host, with only trusted hosts on it, and of course it's firewalled against access except from a few hosts (and has no Internet access). It runs on desktop hardware and consumer SSDs!
You can't argue with years of service! It's the only currency in the operations business!
"For me, I don't consider KDE an option, ever, because it wastes so many resources on eye candy, bogging down my system. In a VM, KDE is unusable too because it is so bloated."
What are you on about? KDE uses less CPU and RAM than even XFCE these days; KDE is incredibly light.
I went directly for Budgie after trying it in a VM. It has a few flaws, like not being able to spawn panels on a second monitor, but I don't really care. The look is nice, it's very lightweight, and at the end of the day I do my work in a shell or a browser, so what's underneath is almost irrelevant; I could just start my stuff from a shell for that matter.
I used to really love KDE, but it sadly lost its way from version 4 onwards, requiring a graphics chipset with quite some oomph at a time when many computers still didn't have one, making it (for the time) bloated and slow.
On the other hand, early releases of Gnome 3 were indeed horrible, but, now, with a useful search box, an unobtrusive dock, a Mac-like launcher, a Mac-like window thumbnail previewer (you might as well borrow other people's good ideas), and good keyboard control, I find that I do actually quite like it.
In a way, it's oddly amusing that the open source community has managed to make something that is perhaps everything that Windows 8 was trying to be (but failed), and did a far, far better job of it.
My only real complaint would be that there are a number of features that are still very desirable and useful that are relegated to the Tweak Tool rather than being properly available via the main Settings window.
I realise that it is not for everyone, but I've never been one to have a desktop full of icons; there's no way I'd ever see any of them behind all of my open windows!
"Modern KDE can indeed be far too resource-heavy, but at least you can configure it to be light."
Can you easily get rid of crap like akonadi being a prerequisite of everything I'd like to install but don't want a goddamn desktop indexer for?
I used to like KDE a lot. Used to.
"I used to really love KDE, but it sadly lost its way from version 4 onwards, requiring a graphics chipset with quite some oomph at a time when many computers still didn't have one, making it (for the time) bloated and slow."
I keep hearing this, but my reality doesn't match the griping. I installed Slackware 13.0 on a (then) 6 year old laptop in 2009. It came with KDE 4.x. The laptop in question (HP zv5105) has low-end Intel graphics with "shared" 64megs of memory. It ran KDE 4.x quite nicely, and was my primary computer for 7 years.
I wrote: "The laptop in question (HP zv5105) has low-end Intel graphics with "shared" 64megs of memory. It ran KDE 4.x quite nicely, and was my primary computer for 7 years."
Make that an ATI Radeon 9000 IGP w/64Megs of shared memory. It was my primary computer for 7 more years. I just booted it up, and it is running the video software that came with Slackware 14.0 (no ATI driver). It works fine; I still use it on the road when I'm in a place where it might get lost, stolen or strayed (nobody is going to steal a 15-year-old laptop!).
She started on Slack 9.1 and was updated as Slack was updated. The old gal got wiped & re-installed for a couple of releases of Slack, after major architecture changes. She's now running Slack 14.0 -stable ... I'm tempted to update to Slack 14.2 -stable, just to see how it goes. KDE has never even stumbled in all that time (aside from issues that everybody was having, of course).
They say the memory is the first to go ... or maybe I forgot it was ATI video because I never thought about it again after the initial installation. Still, not too bad for a 15 year old machine :-)
KDE runs quite well on my Intel Braswell dual core laptop with 4GB of RAM and Intel integrated graphics. It's probably the most underpowered CPU/GPU you can get from Intel right now, but it doesn't have any trouble with KDE. It runs better than Windows 10 did for the very short time it was on there.
"Same reason Linux took off and BSD not so much. Because LibQT was open sourced just a bit too late."
Nah. LibQT became an important part of the equation many years after Linux blew past BSD in the number of developers and/or users column. The real reason is because AT&T's lawyers had issues with BSD. Linux was seen as being unencumbered with perceived AT&T license problems. We needed a cheap-or-free *nix for work, home, and play ... Minix wasn't it (yet ... it's a good general purpose OS now, check it out!), Coherent was about to go away (open sourced in 2005, find it at tuhs), the other commercial offerings were either brain-dead or stupid-expensive for home/student use. So Linux+GNU won by default.
That was the mid-90s; the rest, as they say, is history.
Note that I'm not anti-BSD, far from it ... I've used BSD since roughly the same time that ken got to Berkeley. It's a great OS, and IMO a better server OS than Linux ... although Linux is breathing down BSD's neck in that department.
It was "... windows and controls gain a lighter feel." that caught my eye. I have absolutely no idea what it means, and the rest of the article failed to enlighten me.
Are we talking about "light" as opposed to "dark", here, or "light" as opposed to "heavy"? I'm not sure that either means any more than the other, in this context, anyway.
Yup, Microsoft has had to go flat to lower CPU and GPU loads to help with battery life. If you don't recall, Microsoft Windows on portables has always been a big battery hog compared to all other OSes. So with Microsoft going "flat", that seems to be what some people think is "modern".
Linux distros should always stand up and stand out for what's better. Following Microsoft is never a good thing.
"Microsoft has had to go flat to lower CPU and GPU loads to help with battery life"
From a company that walks over dollars to pick up dimes, then. Even *IF* (and I highly doubt it) that this 2D FLATSO trend 'saves battery life', your computer is now LESS efficient because of ".Not" and UWP, and so whatever CPU efficiency is allegedly "made up for" by the 2D FLATSO FLUGLY has _EASILY_ been UN-DONE by the ".Not" "UWP" gross inefficiencies!!! Just watch the CPU during any application loadup involving ".Not" and you'll see what I mean.
And while running, UWP applications have a tendency to 'spin' the CPU from what I've seen. it's somehow worse when you run it in a VM, worst of all when multiple UWP CRapps fight one another for CPU time while waiting for god knows what interprocess communications they're trying to do, between the 'start thing' cortana and all of that slurping.
Yeah, what a joke, to even REMOTELY try and claim that MS was trying to save on battery life by doing the 2D FLATSO. HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA!
What is the matter, you don't like searching each screen for a minute to find where the buttons are?
Well, I suppose at least you might be less likely to commit any hasty mistakes.
You should be able to just hit Return for OK/Accept, but that ability appears not to be working (at least the last time I used Gnome-based GTK3 applications).
I've seen this. A pop-up asks for a response, you type in text, hit return, and for some crazy reason the program responds as if you clicked on a different button that is hidden behind the pop-up. It's the sort of thing that makes you wonder if programmers are human, or some monster which will be revealed in the next episode of Doctor Who.
For many thousands of years, we and our ancestors have been dealing with three dimensional objects in space. Being able to do this well and quickly no doubt often meant the difference between making a meal of a dangerous animal or it making a meal of you. We're well adapted to living in a three dimensional spatial world.
In terms of biology, we're no different than humans who lived and died before personal computers existed. We're still wired to intuitively respond to three dimensional objects. In terms of GUIs, I've said before that our perception of skeuomorphic, non-flat UIs is hardware-accelerated in our brains. Effects like shading and shadows that give the illusion of depth allow us to instantly recognize that a window is distinct from the background because it has a shadow or that buttons are meant to be pressed because they look like actual buttons. Flat interfaces that don't look like anything we're wired to instantly recognize require more cognitive processing for us to figure out what UI elements are actionable, and that takes a person's attention away from the task at hand for a brief moment. It adds up... I think the Reg article some time ago said that the subjects using flat interfaces were 22% slower than those using skeuomorphic ones, according to the study they were reporting upon. A particularly bad flat interface could be still worse if you have to hunt around to see what is an active UI element.
Looking modern or what some people think is attractive (I think flat interfaces are ugly) is not a good reason to make user interfaces (or themes that define their appearance) that are slower and less intuitive. Function is beauty.
> the obvious question is “what’s new?” The answer is… not a whole lot.
But this is true of almost every Linux [ and by "Linux" we all know that means the kernel and the suite of apps that make up a distribution ] - and has been for years.
The question that rarely gets asked and even less frequently gets a satisfactory answer is: what will I be able to do, with this release, that I could not do before?
And most times the answer is "nothing". For many years now, all new Linux releases have been merely rolling the version numbers on libraries and utilities (squashing bugs and fixing security problems), adding support for new hardware and fiddling with the UI.
The only real change that has arrived in recent years is systemd. But even that is 4 years old, is hated as much as it is adopted and makes no difference at all to the users and the list of functions they can use.
One could argue that stability is a major benefit. That being able to take a user from 20 years ago (i.e. me!) and plunk them down in front of a Linux desktop that they will instantly recognise and be able to use, is a good thing. Apart from some minor silliness, like moving the position of menus and toolbars it is totally familiar. This is very true. But it is not innovation, it is not "cutting edge" and it is not what developers want to spend their time doing.
Linux has grown fat and slow in middle age. It is no longer the inspirational "alternative" it once was. It no longer leads in terms of utility or design. Yet it contains all the old baggage that makes it a hostile environment for people to adopt. Just try adding a new package - download this, edit that, compile the other, add new libraries to satisfy installation criteria, fix conflicts and maybe - just maybe - after a full day of effort and Googling user forums that shiny new app will work.
We should be at the stage where all a user has to do is sit at a screen and say (or type) "I want to write a document" (or letter, email, flame, program, magazine review ... ) and everything just happens. And the same applies to hardware - especially stuff you can plug in like USB. None of these should be issues, but they are all insoluble due to group dynamics and office politics within the community.
So Linux will continue to increment version numbers. Giving the illusion of progress without change. And in 20 years time someone else will re-write this comment about Ubuntu 38.10. That is, if the Y2038 problem hasn't destroyed the world.
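And for anyone who hasn't met the Y2038 problem: Unix time stored in a signed 32-bit time_t runs out in January 2038. A quick Python sketch of exactly where that cliff edge sits:

```python
# Y2038 in a nutshell: a signed 32-bit time_t counts seconds since the
# 1970 epoch and tops out at 2**31 - 1, one second before it wraps.
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest second count a signed 32-bit time_t can hold

rollover = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00 -- one tick later, wraparound
```

Systems that have moved to a 64-bit time_t (most modern 64-bit Linux installs) are fine; embedded kit and old 32-bit ABIs are where the fun will be.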
"it contains all the old baggage that makes it a hostile environment for people to adopt"
To a certain extent, yes it does - if you've come from Windows to Linux, just try adding a second hard drive for extra storage! It's relatively straightforward once you know what to do, but seeing how good the current crop of distributions is, it would be better if Linux had a built-in utility that performed all these steps for you.
Not everyone knows how to boot from a pendrive and edit fstab in vi to recover their system.
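For the record, the steps themselves aren't many once you know them. A sketch of the usual routine, where the device names (/dev/sdb, /dev/sdb1) and the /mnt/storage mount point are assumptions for illustration - check your actual device with lsblk before touching anything:

```shell
# Partition the new disk, make a filesystem, and mount it permanently.
sudo parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/storage
# Reference the filesystem by UUID so the fstab entry survives device reordering:
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb1) /mnt/storage ext4 defaults 0 2" | sudo tee -a /etc/fstab
sudo mount -a   # mounts everything in fstab; an error here means fix it now, not at next boot
```

And if a bad fstab line does leave the system unbootable, recovery is the part that scares newcomers: boot a live USB, mount the root partition, and remove the offending line - hence the point about vi and pendrives above.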