* Posts by Gordan

653 publicly visible posts • joined 15 Oct 2008

Page:

Fear not, Linux admins: There are TOOLS to help you

Gordan

SSH and Mitigating Brute Force Dictionary Attacks

There are reasonably elegant ways to mitigate SSH brute force attacks that are available out of the box.

For example, if your machine has IP address 10.0.0.1, you could apply iptables rules along the following lines:

iptables -t filter -A INPUT -d 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name filter_10.0.0.1_22 --rsource

iptables -t filter -A INPUT -d 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 2 --rttl --name filter_10.0.0.1_22 --rsource -j DROP

iptables -t filter -A INPUT -d 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT

This will effectively limit the number of new SSH connection attempts from any one source IP to one per minute, which makes brute force dictionary password attacks infeasible (unless the attacker is distributing the attempts across a large botnet).

If you are particularly bloody-minded and have the TARPIT iptables target patched into your kernel, you could replace "-j DROP" above with "-j TARPIT" for good measure. This ties up the attacker's connections at the IP stack level, leaving the attacking process stuck waiting for a response.

Of course, this doesn't mean it's OK to run with direct root ssh access enabled. :)
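As an aside, the state the recent match keeps can be inspected and manipulated at runtime through /proc, which is handy for un-banning yourself after fat-fingering a password once too often. A sketch (list name as per the rules above; the directory is /proc/net/ipt_recent on older kernels, and the example address is made up):

```shell
# Show the source IPs currently tracked by the rules above
cat /proc/net/xt_recent/filter_10.0.0.1_22

# Remove (un-ban) a specific address from the list
echo -192.168.1.100 > /proc/net/xt_recent/filter_10.0.0.1_22

# Flush the entire list
echo / > /proc/net/xt_recent/filter_10.0.0.1_22
```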

You could apply something similar at a layer further up the networking stack, for example to mitigate brute force attacks on your blog account login:

-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "/wp-login.php" --algo bm --to 64 -m recent --set --name filter_10.0.0.1_80 --rsource

-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "/wp-login.php" --algo bm --to 64 -m recent --update --seconds 120 --hitcount 3 --rttl --name filter_10.0.0.1_80 --rsource -j DROP

Again, you can replace "-j DROP" with "-j TARPIT" if you have TARPIT patched in.

You can also drop access attempts to known attack targets (which you hopefully don't have publicly reachable on your servers):

-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "phpmyadmin" --algo bm --to 1024 -j DROP

Or drop access attempts from undisguised penetration testing tools (you'd be amazed how many script kiddies don't bother changing the user agent string):

-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "ZmEu" --algo bm --to 1024 -j DROP

And in those last two cases, again, you can replace "-j DROP" with "-j TARPIT".
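One caveat worth remembering: rules added with iptables -A only live in the running kernel. On a RHEL-ish distro, something along these lines will persist them across reboots:

```shell
# Dump the running ruleset to where the init script restores it from
iptables-save > /etc/sysconfig/iptables

# Equivalent, via the init script on RHEL/CentOS:
service iptables save
```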

All pretty basic stuff and all the tools required ship in the base distro. It's not the tool you have, it's what you do with it that counts. ;)

CentOS penguins maul Oracle's Linux migration pitch

Gordan

Re: The simple script

Probably similar but a bit more along the lines of:

rpm -Uvh --force oracle-release-$version.x86_64.rpm

rpm -Uvh --force oracle-logos-$version.x86_64.rpm
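Presumably followed by pointing yum at Oracle's public repository and letting it pull across any packages that differ. Something along these lines, though the repo file name is from memory and may vary by release:

```shell
# Fetch the repository definition from Oracle's public yum server
cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol6.repo

# Let yum replace/update anything that differs
yum clean all
yum -y update
```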

Raspberry Pi rolls out speed surge Raspbian OS

Gordan

Re: Improvements @Gordon (op)

@James Hughes 1

If it makes so much difference then why do the benchmark results not show it? The numbers do not bear out your statements.

Also it is a comparison between ARMv4 code and ARMv6 code. What about the improvements over the most commonly used ARMv5 code?

Gordan

Improvements

So the only worthwhile improvements are in:

1) MP3 decoding (which R-Pi is plenty fast enough to do already)

2) x264 encoding (let's face it, R-Pi is hardly going to be the platform of choice for video transcoding even with the 37% speed boost)

The Quake benchmark is probably the most realistic measure of the improvement we are likely to see on average, and it shows a hardly earth-shattering 8%.

An interesting project, sure, but realistically, a waste of time considering the minimal improvements - except for those wanting to use the R-Pi for a lot of A/V transcoding (which will still be too slow for practical use anyway).

Cleversafe cuddles up to MapReduce, kicks HDFS out of bed

Gordan

Re: As I understand it

@AC:

You have missed the most important point regarding the performance, and that is locality. The idea is that you "map" the task to the node that has the data, rather than move the data since moving large amounts of data is prohibitively expensive compared to processing it locally. By having multiple copies of the data, you can load balance similar tasks that need mapping to the same data efficiently. Multiple copies of the data in Hadoop aren't just about redundancy, they are even more about performance. Any reasonable systems architect would steer well clear of a system that compromises that by making the data remote from every node.

Gordan

Re: must have missed something

Cleversafe effectively provides a distributed network block device with parity RAID style redundancy.

Gordan

"Cleversafe says it stores only one copy of the MapReduce data instead of three, which is cheaper as capacities grow from terabytes to exabytes and on to petabytes."

The order in the units is wrong: a terabyte is 1000 GB, a petabyte is 1000 TB, and an exabyte is 1000 PB.

But typos aside, the performance of Cleversafe is likely to suck, because their storage technology effectively applies network RAID N+M (N data disks + M redundancy disks; M=1 is RAID5, M=2 is RAID6, etc.). The point being that instead of the data being local to some number of nodes as in HDFS, the data is remote from all nodes, so there will be a performance hit. Parity RAID (5, 6) also has massive performance issues with small-ish writes. (One exception to this is the ZFS implementation, which works around the read-modify-write overhead by making the stripe size dynamic, but that's getting a bit off topic.)

But to summarize - the reason HDFS keeps multiple copies of the data is not just redundancy, it is performance: data is local to some nodes, which typically determines which node a task gets "mapped" (as in map-reduce) to. If Cleversafe are saying that their solution to multiple copies of data is to keep the data remote from all nodes (with extra overheads to boot), then that's not very clever at all (even if no less safe).
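The locality HDFS provides is not hypothetical, either - you can see exactly which datanodes hold a replica of each block, which is precisely the information the job tracker uses when mapping tasks. For example (path made up for illustration; older releases use hadoop fsck rather than hdfs fsck):

```shell
# List, block by block, which datanodes hold a replica of the file
hdfs fsck /user/gordan/dataset.csv -files -blocks -locations
```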

Valve to raise Steam for Ubuntu

Gordan

Re: Other distros

@RAMChYLD:

Making the game static and bundling the shared libraries suffer from the same degree of bloat. The only way you benefit from shared libraries is if you are linking against what already ships with the distro. Otherwise you might as well make the binary static as far as the memory footprint is concerned.

The middle way would be to auto-detect the library versions that exist on the distribution, use the locally available ones where possible, and only bring your own for the ones that are missing. For extra points, make a yum/apt repository for each of the distro releases and integrate the system libraries the games need by adding a repository to the distro. That, however, means you have to support multiple distribution packaging methods in addition to Steam, instead of just Steam. Somehow I don't see game distribution system developers putting in that much effort. The only viable options I can see are either fully-static binaries or only supporting distributions with an infrequent update cycle favouring stability over bleeding edge features (e.g. RHEL and Debian, rather than Fedora and Ubuntu).
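For what it's worth, the auto-detection part doesn't even need packaging support - a launcher script can do it. A rough sketch (library names and binary path purely illustrative):

```shell
#!/bin/sh
# Hypothetical launcher: use system libraries where available and only
# fall back to the bundled copies if something is missing.
GAMEDIR=$(dirname "$0")

for lib in libSDL-1.2.so.0 libvorbisfile.so.3; do
    # ldconfig -p lists all libraries known to the dynamic linker
    if ! ldconfig -p | grep -q "$lib"; then
        # Coarse-grained: prepend the whole bundled directory; a finer
        # approach would symlink in only the missing libraries
        export LD_LIBRARY_PATH="$GAMEDIR/bundled-libs:$LD_LIBRARY_PATH"
        break
    fi
done

exec "$GAMEDIR/game.bin" "$@"
```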

Gordan

Re: Other distros

@JEDDIAH

If you are running statically linked binaries or you have built your own library-compat packages from scratch, I can believe that. But if you're going to do that, using a packaged distribution isn't really an advantage in terms of time-saving - you might as well roll your own Linux-from-scratch.

Gordan

Re: Other distros

@ Rob Beard:

You are missing the point. If you are targeting a frequent-release distribution, you are effectively setting yourself up for having to support every new release as it comes out. That can require a lot of work and is a waste of development effort that could be better spent elsewhere. Otherwise you end up arguing about what is and isn't supported with a horde of poobuntu fans crying foul because they just upgraded to a new version of the OS and their games no longer work.

Gordan

Re: Other distros

@fishman:

"RHEL6 is based on 3+ year old software, updated with security patches and bugfixes."

You haven't quantified why this is a bad thing. Stability is generally seen as an advantage over frequently moving goalposts when it comes to development and support.

Gordan

Re: Other distros

Nowhere nearly as hard and labour intensive as having to re-base it on a newer version of software every 6 months.

Gordan

Re: Other distros

You clearly never tried such things.

The problem is that packages have library dependencies, and if the versions are too different, you end up with irreconcilable dependency conflicts between the software you are trying to install and most of the software on your system.

This is why frequent release cycle distributions like Fedora and Ubuntu are not fit for purpose for people who are not prepared to reinstall their system every 6 months. Imagine MS rolling out a completely new version of Windows every 6 months and only supporting each version for 12 months. That is the level of longevity expected of Fedora. Ubuntu is a little better, but not much. It isn't a viable approach.

One possible workaround is shipping all dependencies with the package itself, or providing a monolithic statically linked standalone package which _should_ work for most people on most distributions. But even then it doesn't always work out due to developer competence (or lack thereof). For example, static Skype 4.0 for Linux still has an external dependency on libtiff.so.4, which it turns out, doesn't exist in any version of Fedora (F17 has libtiff.so.3, F18 has libtiff.so.5).
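This sort of thing is trivial for a packager to catch before shipping, too - ldd reports unresolved dependencies directly (binary path hypothetical):

```shell
# Any "not found" line is a library the supposedly standalone
# binary still expects the target system to provide
ldd ./skype-4.0/skype | grep "not found"
```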

Gordan

Re: Platform meaningless without games

Not to mention the stuff Loki ported to Linux many years ago, including Descent. Plus all the id games (Dooms, Quakes) that have run on Linux practically since they were released.

Gordan

Re: Other distros

This is largely not true. Crysis dates back to 2007. That was when I built my last gaming rig. I still have it, unchanged (Intel C2Q, Nvidia G92). It runs Crysis beautifully with everything except AA turned up to max on a 1920x1200 screen. I have played through a fair number of games since then and have never felt the frame rate drop below what my eyes can pick up. So your premise that we are expected to upgrade every 6 months or so is very wrong. The only vague reason to upgrade since then is OS related (there are some games, albeit very few, that require DX10), not hardware related.

Gordan

Re: Other distros

This is actually an extremely good point. If they choose to keep up with bleeding edge distributions, they are liable to learn the hard way, reasonably quickly, why following the bleeding edge makes so little sense. :)

Gordan

Re: Other distros

Most popular _stable_ distribution. Fedora is a perpetual pre-alpha bleeding edge incubator for RHEL (for an idea of just how pre-alpha it is, look at the stabilization period between when a Fedora is released and when the RHEL based on it is released: F6->RHEL5, F12->RHEL6). Poobuntu is not much better, due to its unending pursuit of the bleeding edge.

Gordan

Other distros

A RHEL/CentOS/SL 6 version would be nice. I really am getting thoroughly fed up with various software of late not providing a working package for the most popular, stable, enterprise-grade Linux distribution while providing packages for poobuntu. Granted, a Fedora version is generally provided, but it is insane to expect users to keep upgrading their distribution every 6-ish months just to stay on the latest bleeding edge.

Apple rejoins EPEAT green tech cert program

Gordan

Batteries glued to the case?

Does that mean they are no longer going to glue the batteries to the laptop casing in a way that makes them completely non-removable and non-replaceable?

Also, does that mean they are going to stop bonding aluminium to glass and plastic in screen assemblies?

Fusion-io server strokers show off 2.6TB RAM extension

Gordan

Re: Simplifying?

Is the patch for the Linux kernel available?

Gordan

Re: Simplifying?

"The performance is very different because we reorganize writes in a way that swap doesn't (and in some cases, can't)."

A dumb swap disk can't, sure, but a decent SSD (with plenty of DRAM cache) can and does. Any sanely designed modern SSD (OK, that may narrow the field down to a precious few, but that's not the point) will do all the writes sequentially for performance reasons (unless there is a pathological situation going on that prevents it, e.g. no spare unmapped blocks being available to do the writes sequentially - highly unlikely on a TRIM-capable SSD with reasonably over-provisioned NAND).

Similar optimization can be applied in software, e.g. using Managed Flash (http://www.managedflash.com/index.htm) or on a file system level, e.g. nilfs (http://www.nilfs.org/en/).

I had a quick look through the paper and can't spot any like-for-like comparison of normal swap vs. fake RAM using the same backing device (comparing high-end PCIe-connected NAND to consumer-grade SATA-connected NAND is not a reasonable comparison). Can you provide a like-for-like comparison benchmark?

Gordan

Simplifying?

“The ability to optimise key operating system subsystems for flash with tools such as Extended Memory simplifies performance for developers in ways that were out of reach just a couple of years ago,"

Really? Most OSes of the past 30 years have had the ability to swap to disk built in. What exactly is there to simplify? It is already completely transparent. From what is described, this merely sounds like making big news out of having a 2.6TB PCIe-connected SSD to put your swap file on. What is the big news here?
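For comparison, putting plain old swap on a fast flash device is a two-command job, which is what makes the "simplification" claim ring so hollow. Roughly (the device name is an assumption - use whatever node the card presents):

```shell
# Initialise the device as swap and enable it at a high priority
# so the kernel prefers it over any slower swap already configured
mkswap /dev/fioa
swapon -p 10 /dev/fioa

# Verify the swap areas and their priorities
swapon -s
```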

Formspring springs a leak: 28 MILLION passwords reset after raid

Gordan

Upgrading from salted SHA256?

Really? That sounds pretty paranoid. Those hashes are only going to be vulnerable to dictionary attacks on any weak passwords that people used. But weak passwords are perfectly attackable anyway, without access to the hashed passwords.

SMART's new SSD wrings extra juice from MLC flash

Gordan

Re: "can do 50 full drive writes a day for five years"

There are ways to minimize flash wear. For example, you could first try writing/erasing a block using a tiny current to minimize the wear. Read it back and check whether the operation stuck. If it hasn't, apply a tiny bit more current and check again. This reduces the amount of wear, at the expense of controller complexity.

It is also not clear what degree of over-provisioning this drive applies - it could be that it actually has 2x+ the amount of flash that it exposes to the user. This implicitly reduces write amplification (2x over-provisioning means a write amplification rate of 1x should be achievable with some clever controller programming) and multiplies the total write endurance on top.

It also doesn't say what the test pattern is. The drive could be gaining a substantial amount of its performance through methods such as compression and deduplication carried out in the disk's firmware. Some SandForce controllers are rumoured to have been doing this for quite a while. The downside is that if your data is deduplicated and a block goes bad, you lose all instances of that block. Similarly for compression, losing one block of compressed data actually means losing a greater amount of data. All of this is manageable, but it will negatively affect the expected/rated error rate of the disk.

None of these technologies and approaches are new or revolutionary - it is just an evolution of what we have already had for a long time. But it does buy flash more time until a better, more permanent replacement of the technology is ramped up (e.g. phase change or memristor).

Used software firms win small victory in shrinking on-premises world

Gordan
Thumb Up

Steam/Origin?

Excellent, sensible news, from the world of legalese for once.

How does this affect Steam and Origin users? Does it mean that they will have to make the licenses transferable on a title by title basis?

At the moment, if you intend to keep your games re-sellable you have to set up a separate email account (plenty of free webmail around) and a separate Steam/Origin account based on it, and then give the buyer the whole account set. Of course, this is against the T&Cs of Steam/Origin, but hopefully in light of this ruling those T&Cs become illegal and unenforceable in the EU (not that they had any way of stopping this sort of thing in the first place).

Does this new ruling mean that Steam/Origin and similar will have to go one step further and explicitly enable unbundling of a license from a user's account to facilitate sale of a licence without the sale of the whole account?

Stanford boosts century-old battery tech

Gordan

How much of an improvement?

OK, so they are boosting the charge/discharge rate by increasing the electrode surface area through the use of clever materials to stick nickel and iron particles to. But how much of an improvement is this over the obvious first-pass solution of using iron wire wool and nickel-plated iron wire wool for the electrodes?

And what about the battery longevity? The key advantage of NiFe batteries is that they are capable of surviving many thousands of recharge cycles, unlike other battery technologies. Does the use of nano-scale electrode wires impact how quickly the batteries wear out? If it does then this technology scores a massive own-goal.

Red Hat Storage Server NAS takes on Lustre, NetApp

Gordan

RH works very well indeed with things like ZFS.

Gordan

Re: Performance?

Having used it extensively, and having contributed GlusterFS patches to make it work as the rootfs for the Open Shared Root project:

http://lists.gnu.org/archive/html/gluster-devel/2009-01/msg00169.html

I can tell you that the performance isn't as good as NFS when used for the same purpose, the network latency being the same in both cases.

A similar discrepancy is apparent when it is used for things like /home.

I'm sure somebody will claim (without fresh comparable benchmark figures) that the performance situation is substantially different than it was back in 2010, but for reference, you might want to look at the figures I produced back then:

http://lists.gnu.org/archive/html/gluster-devel/2010-01/msg00043.html

Specifically, for an idea of how much difference being in userspace makes on the server side, you can compare the performance of knfsd vs unfsd (more than double). Then look at the nosedive in performance when you use GlusterFS on both sides of the equation.

GlusterFS is great for large streaming accesses, but if you have an IOPS sensitive load (and virtually all loads are IOPS sensitive), the performance is going to suck pretty badly.

Gordan

Not quite as rosy

GlusterFS is a great product based on a great idea. Just a shame that it has been designed without considering any concept of fencing and split-brain prevention/management. That makes it fundamentally unusable in a safe fashion for a lot of tasks.

Ten... Androids for under 200 quid

Gordan

Re: You really do

Actually - you don't. Quite the opposite. It is generally the more expensive phones made by big name companies like HTC that are the problem. HTC is one of the most rampant GPL violators. Cheap, lesser branded phones are typically quite well supported.

Gordan

Cyanogen?

It would be good in the future to include availability of a CM ROM for Android phones when reviewing them. ROMs that the phones ship with are usually quite bloated and crippled, with the most useful features such as tethering over WiFi/USB removed from the menus. Having an available CM port (or another suitable rooted lightweight firmware with all tethering features available) is a necessary feature for a lot of Android users.

This would also have a positive effect in encouraging people to use phones made by manufacturers that aren't violating GPL and are providing their kernel source code modifications as they are legally obliged to.

HP asks court to force Oracle to obey Itanium contract

Gordan

Re: After the Google lawsuit

@Wensleydale Cheese

Indeed, but MySQL is GPL-ed and there are forks available should Oracle start to not play ball. So far so good.

PostgreSQL is an _awesome_ database that I use quite extensively, better in many ways than MySQL, but MySQL is just more ubiquitous and, more importantly, generally regarded as being crap. This is why I was so disappointed with Oracle when I found it doesn't even measure up to MySQL.

Gordan

Re: After the Google lawsuit

"I don't see happy customers either way,"

Considering the:

1) disappointing performance

2) unjustifiable lack of SQL syntax features (e.g. no cross-table updates without painfully slow sub-selects, usually nested ones, braindead explain syntax with output that doesn't even provide index details and row count estimates, etc.)

3) lack of diagnostic functionality such as logging all queries (possible but the performance overhead is massive, output huge and cluttered and needs to be post-processed to be useful, whereas with MySQL it is pretty much free and gives just the useful part of the output)

[...]

I would say that Oracle have no happy customers in the first place. It was only after I had to use Oracle that I found out just what a shining beacon of SQL database excellence MySQL actually is.

Doug Cutting: Hadoop dodged a Microsoft-Oracle stomping

Gordan
WTF?

SPARC?!

"It means you don’t need big, centralised servers like mainframes or SPARC servers; it's a gift to x86 computing."

You cannot be serious. Have you actually checked the performance of SPARC CPUs against x86 in the past decade? The performance of SPARC is beyond embarrassing and has been falling further and further behind since the turn of the century.

Natwest, RBS: When will bank glitch be fixed? Probably not today

Gordan

Proof banks are not systemically important

Maybe this is a good thing in disguise. It shows that banks aren't so important that they must be prevented from going bust. We can make do without them. Maybe we should see this as a practice run for pulling the life-support plug on the lot of them.

Arts & social-sci students briefly forced to do useful work at Foxconn

Gordan

Sounds like an innovative and progressive way...

... to curb unemployment.

New body to supervise as your NHS file includes more and more stuff

Gordan

64b/1?

What about 27b/6 ?

Samsung 830 SSD: Competition

Gordan

Deliberate Trick Question?

The disk reviewed was, presumably, 256GB. But the options are Mb, Gb and Tb. Since the closest option is out by a factor of 8 (bits vs bytes), where's the "other" option? (I'm not going to argue about the size of a byte also being architecture-dependent this time; that would probably be a bit too facetious. ;) )

Reloaded Doom 3 shoots onto shelves this autumn

Gordan

MP COOP Mod?

Will the old MP co-op mod still work? It was a pretty ingenious piece of work that made the game spawn a 1-team team-deathmatch game with the campaign as the "map".

Google blocks MP3 rippers from YouTube

Gordan

youtube-dl + mencoder?

Harder to block than just an IP range.
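For the uninitiated, the recipe is roughly the following (URL and file names are placeholders; mencoder's rawaudio muxer plus lame does the MP3 part):

```shell
# Grab the video
youtube-dl -o video.flv "http://www.youtube.com/watch?v=XXXXXXXXXXX"

# Discard the video stream and re-encode the audio as MP3
mencoder video.flv -of rawaudio -oac mp3lame -ovc copy -o audio.mp3
```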

Apple 15in MacBook Pro with Retina Display

Gordan

Re: <3 the screen

The real question is whether the interface is standard LVDS. I didn't think vanilla LVDS had enough bandwidth to drive 2880x1800. If it is vanilla LVDS, there is a good chance that the screen would "just fit". You might need to change the inverter, but that's not that big a deal. I'd be tempted to try it, but the panel doesn't appear to be available on its own in the usual places like aliexpress.com. :(

Gordan

<3 the screen

Has anybody managed to procure the screen separately and get it working in a generic 16:10 15.4" laptop like the Clevo M860TU?

Three touts 'unlimited' Euro data roaming for a fiver a day

Gordan

Dual SIM Phone

Sadly, until the data allowances cover roaming usage, the only sane solution is still a dual SIM phone.

Thankfully, there are plenty of cheap, decent Android phones available with dual SIM capability because of the demand for them in Asian countries where cross-network calls even within the same country can be prohibitively expensive.

Apple introduces 'next generation' MacBook Pro with retina display

Gordan

Re: Let's have screen resolution become a talking point, please

Internal display. A T60 with the 1400x1050 display can be upgraded to 2048x1536 using the IAQX10N TFT panel. Pretty easy upgrade, too. Just make sure you get a T60 with the 1400x1050 screen to start with - otherwise you'll also have to change the backlight inverter. You may also need to re-flash the EDID on the new TFT panel to get it exactly right, otherwise some modes might not work.

Gordan

Re: Let's have screen resolution become a talking point, please

Hear, hear.

Could it really be that my aging ThinkPad T60 (2048x1536@15") finally has a worthy replacement?

Ten... Sata 3 SSDs

Gordan

HP SSDs

@Daf, Davidoff, Steve:

If you are looking for SSDs that will actually work in HP disk array enclosures (e.g. MSA70) - most won't (I've tried; most, including Intel, OCZ and others, start to error out within seconds) - which is, I suspect, why you are specifically looking at HP SSDs.

However - Kingston SSDs (tried with the 500GB V100) work just fine. Kingston kindly provided a set for testing to a previous client of mine and we hammered them for days with various flat-out loads every which way imaginable, and they never skipped a beat. You might want to give those a go - if you need a lot of them, the price difference may well make them worth a try. They are the only SSDs I tested that passed all our tests in server (Sun/Oracle/HP) grade hardware.

Gordan
WTF?

Lies, Damn Lies, and Bad Benchmarks

I am appallingly disappointed. I would have thought that everybody with a grain of understanding would by now know that sequential I/O tests are meaningless, and that random read/write IOPS (especially write, on SSDs) are the only meaningful figures for assessing the performance of disks. Such tests should be done with write caching disabled, and the amount of data written should be at least 512MB or 10x the amount of cache on the disk, whichever is greater (to avoid the disk faking it by lying about commits - which, incidentally, some SSDs even from reputable manufacturers do with write caching enabled).

And yet we only get sequential read/write performance figures for these disks.

The second most important figure for a lot of SSDs is power usage. This has not been measured either, and not even the manufacturers' (usually highly questionable) figures are provided.

Can this technical oversight please be corrected so that the review is actually meaningful?
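For reference, producing the meaningful numbers is hardly difficult. A sketch of the sort of test I mean, using fio (file path is an assumption; sizes per the rule of thumb above):

```shell
# 4KB random writes with the page cache bypassed (--direct=1); 1GB
# of data is enough to defeat the DRAM cache on most consumer SSDs
fio --name=randwrite --filename=/mnt/ssd/testfile --rw=randwrite \
    --bs=4k --size=1g --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=60 --group_reporting
```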

RedSleeve does RHEL-ish clone for ARM

Gordan

Re: Raspberry Pi?

Indeed it does run on the Pi. This should get you started on the Pi if you are lucky enough to have received one already:

http://opensource.wrenhill.com/?p=123

Gordan
Happy

Next you'll say...

... you have to _hand_ it to me (in a punny reference to the logo), right? :^)

John Lewis appears to punt Chromebook with Windows 7

Gordan

ChromeOS has its uses

Seriously - it provided all the kernel patches required for Tegra2 laptops like the Toshiba AC100. Without the ChromeOS kernel patches we wouldn't have a fully working AC100 kernel today (for running normal Linux, of course - who would want to use ChromeOS anyway?). ;-)
