* Posts by Achilleas

12 posts • joined 29 Oct 2012

Xombrero browser replacement

Achilleas

Re: Xombrero browser replacement

+1 for qutebrowser. Though I don't use it exclusively (I mostly use Chromium+Cvim), it's as close to xombrero as I need it to be.

W is for WTF: Google CEO quits, new biz Alphabet takes over

Achilleas

Re: Your new AI robot overlord

See, you *say* "no joke", but I'm really not sure you mean it.

The Breakfast (Table) of Champions: Micro Machines

Achilleas

2 controllers - 4 players

Not sure if I'm remembering this correctly (perhaps it was Micro Machines 2), but I think the simplicity of this game allowed two players to share a single controller, making four-player races possible with only two controllers.

To simplify the gameplay in this mode, the cars would auto-accelerate, leaving the players to simply turn left, turn right or brake. One player would use the D-pad and the other would use the three buttons of the Mega Drive controller.

It sounds silly describing it now, but I fondly remember it as one of the most fun experiences of my childhood, simply because everything is better when four people are playing a game at once, on the same sofa, without taking turns.

Got a big day planned in 15 billion years? You need this clock

Achilleas
Coat

Milliways reservation

Reading the article, all I can think of is how it's impossible to miss your booking at the Restaurant at the End of the Universe, given how reservations are generally made after you've returned from your meal.

Google crafts neural network to watch over its data centers

Achilleas

Re: This isnt very good

"Nope, I'm afraid you've got the sensitivity analysis bit wrong. The feature space is 19 dimensional. The sensitivity of one parameter (say IT load) depends on the values of the other 18 variables. You cannot just pick one slice through a multi-dimensional space and infer that moving 1 variable (IT load) always gives the same PUE response."

I didn't "get it wrong"; I'm not the one who did the analysis. My description of the analysis isn't any different from how you described it, you're just spinning it in a negative tone ("You cannot just pick one slice ..."), while I was simply stating what it *might* be useful for. The interdependence of variables is, of course, an issue, which is why I said it sort of assumes a linear dependence. If the dependence between parameters a and b is linear, you can expect parameter a to have a similar effect on the output across the entirety of b's values, with b just "scaling" the effect of a (a simplistic example of what I meant by assuming linear dependence). Of course this assumption probably doesn't hold, so yes, the analysis isn't that useful. But it's still not part of the validation.

"Graphs 4b and c are total nonsense as the inputs are integers - In reality there are only 2 data points, 0 and 1 - yet the paper talks about a non-linear relationship if you have 0.79 of a cooling tower. Thats nonsensical."

You're right, that is weird. It's probably nonsense, or they're normalising over the maximum number of chillers and cooling towers. Either way, it's an error on the authors' side.

"Also, while we're at it, depending where you are on an exponential curve, it can look pretty straight."

Only if you do some weird scaling on the axes, independently (large scale for vertical combined with small scale for horizontal), but I see your point.

"Where you are on the curve is dependent on the other 18 variables. So changing these can make the 'curve' look straight. Thats why its fundamentally wrong to extrapolate the response from just one set of values for the other 18 variables."

Yes, we covered this. It assumes linearity, etc. I thought we could get past the issues of the sensitivity analysis by just pointing that out.

"The cross validation is wrong too. The data was sampled at 5 min intervals and 30% was used as 'unseen' test data. But the dataset was shuffled chronologically. Looking at the variables and its highly likely that data received every 5 mins will be highly correlated. Removing (on average) every third data point means the test data is very highly correlated with the training data and cannot be said to be independent or unseen. Thats why the prediction rate is so high, relatively speaking there are a LOT of nodes in that network and it is basically overtrained to pieces with test data pretty much the same as the training data."

I'm not sure there's really an issue here. Yes, the validation samples are picked from "in between" the training samples, but how else would you do it? There's no temporal training, so each sample is assumed independent (as far as temporal sequence is concerned, they're just arbitrary points in time). Any splitting other than random would induce biases. Of course, there is a temporal dependence between data points (consecutive points on the same day probably have very similar ambient temperatures, usage loads, etc.) and they are correlated, but the same can be said for times of day or periods of the year (points from the same time of day across a whole month probably have very similar usage loads; points from the same time of year are probably correlated in loads, temperatures, etc.).

Unseen doesn't mean independent. In fact, if your unseen data is completely independent, you're going to have a hard time testing. The goal is generality, not tricking the network into failing.
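Just so we're arguing about the same thing: the random 70/30 split we're both describing boils down to something like the sketch below. All names and numbers here are my own illustration, not taken from the paper; the comment marks exactly the correlation concern you raised.

```python
import random

def random_split(samples, test_frac=0.3, seed=42):
    """Shuffle indices, then hold out a fraction as 'unseen' test data."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(samples) * (1 - test_frac))
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

# With 5-minute sampling, the chronological neighbours of most test
# points end up in the training set -- the correlation issue at hand.
train, test = random_split(list(range(1000)))
```

An alternative would be to hold out contiguous blocks of time, which avoids the neighbour correlation but, as I said, introduces its own seasonal biases.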

"The usual way to demonstrate this is to show the test and train data performance over the set of training epochs. The training performance generally gets better and better whilst at some point the test data performance will get worse as the nnet becomes over-specified to the training data. We havent seen this."

Agreed, an overfitting test would have been nice.

"The bit you've missed is to evaluate a neural network (or any form of classifier) you need _representative_ data, not complete data. As I said, once you step out of the data range, yes the nnet is 'extrapolating' and providing a numerical answer - but its just guessing. If the weather is different to the data gathered over 2 years (so very hot or very cold) that system is just guessing. And anyone can do that."

It's a better guess than "anyone" can make, though. I mean, it's data across two years. Assuming there haven't been any radical climate or usage changes across consecutive periods, you can probably train on the most recent two years of data and keep updating as you go along. Barring any extraordinary events that make the inputs deviate substantially from their historical values, you're probably going to be within the bounds you've been training with, or only very slightly outside them, which a properly trained ANN should be able to handle.

What could be more representative data for testing than random points in time across your data set? It's not "complete" data, it's a subsample of your usage scenario across 2 years.

"The real point of nnets is providing a tool capable of modelling non-linear relationships where you dont have to have a preconceived model of the relationships. If the relationships are linear then an nnet won't outperform a linear classifier."

It might, but it won't be worth the effort.

"Because of the response function in the neuron you always get a smooth transition through the feature space that looks convincing but thats just an artifact of the maths, not the data."

Not sure what you mean by "looks convincing".

"To be honest, the more I look at this, the worse it gets. As I said it isnt very good or convincing and has some really basic mistakes in it. Its basically nnet models training data very well shock."

I'm not as sceptical, for one very important reason: it worked. Not on testing data or on validation data, but on the real thing *after* it was trained and validated. The sensitivity analysis is a bit broken, I agree, but even with the assumptions it makes, there's no denying that it shows the I-O relationship is highly non-linear (independent of the linear assumption it might be making for the interdependency between inputs), so an ANN is well worth the effort.

Achilleas

Re: This isnt very good

The sensitivity analysis was just an analysis of the impact of individual inputs based on the (already validated) model. You're right that such an analysis might assume linearity between variables, but it's useful for seeing whether the impact of individual inputs follows intuition (e.g., higher load -> lower PUE), and you get a sense of the shape of the dependence (albeit in a rather locked-down test case). Cross-validation and testing were done on 30% of the data, which wasn't used for training, which is how it's always done.
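To be concrete about what I mean by "the impact of individual inputs": the analysis amounts to sweeping one input while pinning the other 18 at a baseline, something like the sketch below. The toy model and every name in it are hypothetical stand-ins, not the paper's actual model.

```python
def sensitivity_slice(model, baseline, index, values):
    """Vary one input while holding the rest at a fixed baseline --
    i.e., take one slice through the multi-dimensional feature space."""
    out = []
    for v in values:
        x = list(baseline)
        x[index] = v
        out.append(model(x))
    return out

# Toy non-linear stand-in for a PUE model, purely for illustration.
toy_pue = lambda x: 1.1 + 0.01 * x[0] ** 2 - 0.02 * x[1]
curve = sensitivity_slice(toy_pue, [0.0] * 19, 0, [0.0, 1.0, 2.0, 3.0])
```

The obvious caveat, which we both agree on, is that the resulting curve is only valid at that particular baseline for the other 18 inputs.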

"Nnets dont magically extrapolate answers and they wont magically interpolate stuff they wont have seen either. Similarly if it hasnt seen all of the combinations of cooling towers running it'll just guess."

If they aren't supposed to extrapolate answers on data they haven't seen then what's the point?

If the data covered the entirety of the problem space, you wouldn't need a model, just a lookup table. Of course "it'll just guess"; that's what ANNs do. They fit a curve to known data, are validated against other data (which we pretend is unknown) and are then used to predict the output for unseen input cases.

Achilleas

Re: doesn't seem very amazing

Smart systems are about feedback-control loops: take readings from sensors -> adjust system to bring the behaviour closer to the desired situation.

Neural networks are (generally) used as predictive models, i.e., what would be the change in the system if I changed the inputs (by xyz amount)? However, I don't see any reason why a neural net couldn't be used as a controller for a smart system.

The way I see it (though I claim no expertise on smart systems), when one talks about smart systems, one means the actual controller (which, again, could very well be a neural net *after* it's undergone training and has been shown to fit the data); when one talks about neural networks and machine learning, one is almost always talking about the actual learning of the relationship between input and output. In other words, what the ANN provides is an automated way of configuring the controller. The point of machine learning is automatically and intelligently figuring out the input-output relationship, while the point of smart systems (as far as I know, anyway) is adjusting inputs in response to deviations of the output from a desired state, i.e., using pre-existing knowledge of the I-O relationship to keep outputs at desired states.
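The sense -> compare -> adjust loop I described can be sketched in a few lines. This is a deliberately trivial proportional controller, with all names invented for illustration; the point is only where a trained model would slot in (as the `plant` predictor or the `controller` itself).

```python
def feedback_loop(plant, controller, setpoint, state, steps):
    """Read the output, feed the error to the controller (which could
    be a trained ANN), apply the correction, repeat."""
    for _ in range(steps):
        error = setpoint - plant(state)
        state = state + controller(error)
    return state

# Trivial plant (output equals state) with a proportional controller
# that corrects half the error each step.
final = feedback_loop(lambda s: s, lambda e: 0.5 * e,
                      setpoint=10.0, state=0.0, steps=50)
```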

Achilleas

Re: doesn't seem very amazing

@Nate and Andy:

(NB: Reading my post back after writing it, I realise it might come off as a bit condescending or simplistic. I apologise; that was not the intention. I felt that describing how ANNs work was necessary, and I have no idea what the average engineer or IT worker knows about them and how they work.)

"Learning" is a tricky word when it comes to artificial neural networks (ANNs) and machine learning in general. Of course it's "just programming", neural networks aren't magic and no one's ever claimed they are, but they're not just "a collection of scripts" either.

The way an ANN (or this one in particular) differs from a standard piece of monitoring software is this: when it's very hard to predict which of the inputs has to change (and by how much) for the output to change in an intended way, the monitoring software ends up relying on intuition and small searches in the parameter space. Imagine you have 5 inputs, 5 values you can tweak in your system. Each one affects the system in its own way, and changing each one has an associated cost. Even if we know how they affect the system relatively (say, inputs a, b and c increase the output value, while d and e decrease it), we're still looking at a five-dimensional space, which is likely non-linear (few things worth investigating are linear), and we're trying to figure out how to spend the minimum cost changing inputs in order to get a desired effect on the output. Alternatively, we just want to be able to predict how a certain subset of the inputs (which are out of our control) affects the output, so we can make an informed decision on how to tweak the rest, or know what to expect when certain cases occur.

Now imagine you have a 19 dimensional input space. Where the learning comes in, what it essentially does, is construct the equation (the model) that connects the inputs to the output. It's more akin to a "smart random search" of the entire solution space, but keep in mind that the network built in this case has a (roughly) 10000-dimensional solution space (which seems a bit like overkill to me, but I guess there was a good reason to make the network 5 layers deep).

I'm sure arguments can be made either way about whether gradient descent (the core learning algorithm in most neural nets) constitutes learning, and I'm sure these arguments have been going on for over 50 years among people much more qualified than me (on both sides). As the article already mentioned, there's nothing really fancy or new about the methods used in this instance. But even though the algorithms and methods have been around for decades, they never really caught on in the IT industry in general, so much so that putting this 50+ year old algorithm to use to actually cut costs is newsworthy!

So no, it's not a bunch of scripts and algorithms like standard monitoring software and it's not just an adjustment to new inputs. It's a minimization/optimization on a 10k-dimensional solution space that produces a predictive model that couldn't be done by exhaustive search or intuitive tweaking by hand.
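For anyone who hasn't seen gradient descent before, the core idea fits in a few lines: repeatedly nudge the parameters downhill on the error surface. This is a one-dimensional toy of my own making, nothing to do with Google's actual network, which does the same thing over thousands of weights at once.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Follow the negative gradient of the error in small steps
    until the parameter settles near a minimum."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3);
# the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```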

Apple, Beats and fools with money who trust celeb endorsements

Achilleas

Re: @Fihart re snakeoil

I was just about to link this when I found your post (had to scan through the three pages of comments to see if it was already mentioned).

http://www.chord.co.uk/product/chord-sarum-tuned-aray-streaming-cable/

I don't know what the best part about these cables is. That it's an Ethernet cable that increases the quality of audio streaming? That they cost over £1,000 per metre? Or that they claim the cable is *directional*!?

I discovered these products a few months ago and was expecting to find out that they were a very elaborate joke. It seems they're dead serious.

Schoolkids given WORLD'S CHEAPEST TABLETS: Is it really that hard to swallow?

Achilleas

Re: All the world's knowledge

Knowing how to look and where to look and what questions to ask is useless if you don't have access to the information to begin with. Why not take care of the easy part first? Sometimes, it's not about asking questions, it's about getting lost in the information until you figure out what questions there are to ask. I remember one of the first websites I discovered was howstuffworks.com, back when I first got onto the internet. I never "asked" how half the stuff I read about works, I wasn't curious enough to actively seek out the information before I knew I had access to it. Once I did have the access and plenty of spare time (so much free time!), I read that stuff until my eyes bled.

I'm sure you try hard to create a personal journey to understanding for each and every student and that's quite admirable (no sarcasm), but is that even possible when talking to a classroom of 20-40 children? Do even half the teachers you know care enough to do this, or do they just go through the material, throwing the information out into the classroom and expect it to stick? (That's not rhetorical by the way, I'm curious to know). I would never argue that teaching can be outsourced to the web, but it can make this personal journey you speak of a reality for the student who cares enough. The internet came pretty late in my school years and I can't begin to imagine how much easier it might have been if I could look up that one bit I didn't quite understand in class on the web and read 15 different explanations, until I could finally fathom the concept.

Achilleas

"Totally agree with the sentiment of this article."

You most certainly don't.

Valve taps testers for Linux Steam

Achilleas

Re: This is going to be funny

Because obviously Valve decided to do this on a whim and did absolutely no research on the subject.

To be honest, though, I don't think Valve is expecting a big increase in sales at all. I really think this is one of those moves that's more about making existing users happy (lots of people have wanted this for years) than about attracting new users from a different corner of the market. I believe they expect that gaming in general is breaking away from the Windows platform (the number of games for OS X has been increasing steadily lately, and indie games from Humble Bundles always work on Linux and OS X), and they're jumping ahead to establish Steam as the standard gaming solution on Linux.

I have Windows installed on my desktop for the sole purpose of playing games. If the games I play the most were available for Linux, I'd drop Windows entirely. I don't think I'm the only person who's like this and Steam for Linux is the first big step in that direction.
