Re: telling quote
I'm guessing that the available namespace for private networks is now reduced to rude words. This could go down well in some workplaces.
Wow. I'd be very happy to have either copper or fibre.
My NBN future (guessing at least the next 10 years) will be wireless delivery. I'm really looking forward to that like a good toothache! Of course at the moment I'm stuck on ADSL 1 unless I switch over to BigPong so maybe I shouldn't complain too much. Friends who have ADSL2 in the region tell me that they are going to be moved off that to wireless in the long term.
A contact doing NBN installs suggests that they are really not interested in anything other than wireless because it avoids playing in pits.
I'm not sure where they would be bothering to install this stuff. It might just be Malcolm Turnbull's place.
"As an example of how TCP congestion control can get in the way of network performance, the paper cites a broadcast of two packets to multiple receivers:"
I think I see a problem here... (hint for non-network people: TCP is very strictly point-to-point not broadcast).
In fairness, I couldn't find the word broadcast in the original paper, only in the story.
"In response to the Heartbleed debacle, a group of NetBSD developers created an OpenSSL fork called LibreSSL."
Actually, that's OpenBSD not NetBSD. OpenBSD forked from NetBSD a long time ago. They have a bit of a history doing this.
He's right. Eventually the iPad will be marginalised.
Something else will be the next big thing and by then Microsoft might have a competitive tablet OS and no one will care.
If Microsoft wants to survive they need to work out what the next big market will be and start working towards that. They also need to shake the belief that the answer to everything is Windows. It may be that no one will want to buy Windows for Underpants.
The iPad really is crap in an enterprise environment and there may be a few bucks to be made building something better for that market. Unfortunately there won't be big money in it, just a few crumbs for the companies still hanging around in that space.
But not GiB.
They've only partly gone over to the dark side.
I apologise. It looked like a late announcement for the existing units. I look forward to reading the new units when they become available.
Um, sorry about your slow news sources. ICA11 was published in 2011.
This is the second year that we (a TAFE in regional NSW) have been using these units.
"*No right to hold a patent unless the holder actually uses it."
So I presume in your grand plans if a company were to design processors but not manufacture them, then they shouldn't be able to license others to do the manufacturing (i.e. make money off their design work).
Seems to me that many companies have a valid reason to patent things but not manufacture them. Perhaps the test should be whether they are actively trying to entice others to license the designs.
I presume the installer is still incapable of working if you're behind a proxy. When I've tried to install it on a work machine, the little installer would immediately die because it was incapable of navigating a proxy server (presumably to keep the installer very small). The only option has been to find the download that the installer fetches and bring it down manually, a task that Google appeared to actively discourage.
A lot of work and enough to make me think that it isn't a good fit in a business environment.
Then again, I gave up trying back at about version 3 or so.
So, information about the vulnerability has been published, Microsoft have been made aware of it, and some time later (guessing > 0 days) we will have exploits in the wild.
How on earth is this then a 0-day vulnerability?
For an "Open Source" project there seems to be a pretty big emphasis on binaries. I suppose the source code is there if you look very hard but certainly not on the downloads page.
Shouldn't this be classed as open binaries?
> looks easier on the eye due to being optimised for low resolution screens
That would be true except for the dialog boxes that are larger than the screen. How many tab keys do you press blind before hitting space and hoping you got the OK button and not the Cancel button? It's fun to guess (often 2, but 3 needed on network manager) but definitely not easier on the eye or optimised for low resolution.
Robert'); Drop Table Students; --
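For anyone who missed the xkcd reference, that string is an SQL injection payload; here's a minimal sketch using Python's built-in sqlite3 (table name and data purely illustrative) of why bound parameters defuse it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Students (name TEXT)")

# String interpolation would let "Robert'); DROP TABLE Students; --"
# terminate the statement early (the classic Bobby Tables injection).
# Passing the value as a bound parameter keeps it inert data instead.
name = "Robert'); DROP TABLE Students; --"
conn.execute("INSERT INTO Students (name) VALUES (?)", (name,))

# The table survives and the hostile string is stored as plain text.
row = conn.execute("SELECT name FROM Students").fetchone()
print(row[0])
```

The same idea applies in any language: never build SQL by concatenating untrusted input.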
Just a little point in favour of IE6.
The old Adobe SVG browser plugin worked with IE6 and gave reasonable results for embedded SVGs in web pages. As I recall, when Adobe dropped support for their plugin ("all reasonable browsers have native SVG support built in") some years ago, IE7 and IE8 didn't exist, so the plugin doesn't work with them.
Embedded SVG was a good way to crash IE7 in some quite entertaining ways. I haven't tried it with IE8.
I think Microsoft are considering adding SVG support to IE9 or 10, so in the meantime, if you must access websites using important internet standards, you should either use IE6 and the unsupported plugin or any other browser released in the last 5 years.
I've been running Thunderbird for a few years now, mostly because its IMAP support is better than Entourage's or Apple Mail's. I use Outlook at work because of an Exchange server but find that its IMAP support is a bit clunky when I connect it up to other servers.
Web-based email always seems like the poor cousin of real email clients. It's something you do when you are forced to, not because you want to.
On a command line my preference is for mutt.
Thunderbird hangs occasionally (mostly when I sleep my laptop while it's checking mail) but not so much that I care.
I would happily move to a better email client if one existed. If that was Thunderbird 3 then good. If someone else gets their act together then they will get a convert.
As the developers of mutt said "All mail clients suck. This one just sucks less."
"petaFLOPS per second"
The PS at the end of petaFLOPS stands for Per Second. The additional per second isn't required unless we are dealing with an acceleration (i.e. per second per second). Alternatively you could use "petaFLO per second" but nobody would know what you mean.
Only mildly less annoying than people who drop the final S when there is only one of them (e.g. 1 petaFLOP).
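To put numbers on it: a FLOPS figure is already a rate, so you get it by dividing an operation count by a time once, not twice. A throwaway Python sketch (the operation count and timing are invented):

```python
# FLOPS = FLoating-point Operations Per Second. A rate is just
# operations / seconds; "petaFLOPS per second" would be a unit of
# acceleration (operations / second**2), which is rarely what's meant.
operations = 2.5e18          # total floating-point operations performed
seconds = 1000.0             # wall-clock time for the run
flops = operations / seconds
petaflops = flops / 1e15
print(petaflops)             # → 2.5 petaFLOPS (no extra "per second")
```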
"The usability issues are gone on a well-configured OEM installation. eeePC showed that."
Have you ever used the rubbish Xandros install on an eeePC? My wife demanded I fix it within a day of getting one. She is now happily using eeebuntu. I look at eeebuntu and think that it is appalling that many of the dialog boxes are too big to fit on the screen, so you have to guess how many times to hit the tab key (to select an unseen OK button rather than the equally hidden Cancel button).
*nothing* gets 10/10
This is meant to be a core market for Linux and they don't get it.
Unfortunately a mathematical proof of correctness may prove that some set of known types of bugs don't exist and it may prove that the program actually matches the specification. What it doesn't prove is that the program is what the customer wanted (i.e. the specification is never complete and will change over time so insisting on it being complete and static is a very good way to get a disappointed customer).
Does proof of correctness result in code that is optimally able to be maintained (oops, sorry - if it starts out life correct then it never needs maintenance does it?).
More importantly, our happy user needs to use this kernel to do some real work so they install a web server on it, along with php, then hire a cheap programmer that has read a book on PHP to write applications for it.
The eventual end user knows nothing about any of this and compromises the integrity of the system by writing down passwords on sticky pieces of paper or surfing pr0n sites that have bonus cross-site request forgeries embedded in them...
It may be nice to have a more robust kernel but I think the money would be better spent on researching how to fix the real problems that plague computer systems.
So let me get this straight,
"We tested out some XML frameworks and some of them broke". Good, this is nice to know. Now tell me which ones so I can see if I have a problem. Not telling? The CERT advisory has a very short list, but if that is the full extent of what they found then it's not much. @Fazal Majid says that expat has a problem - OK, that's interesting to me.
"Broken things might run other people's code". True. Do any of these top pieces of software break like that, or is this just a statement of general principle? I agree with the principle, but not all broken software breaks in the same way.
"here is a list of XML parsing software - we haven't tested most of it but it may all be broken". Or not. I'm having a little trouble with this logic. I want a list of what these guys have tested, not a wikipedia entry on XML.
"We have a piece of software that everyone should be using to test their libraries". OK, now I understand what this article is all about: it's an advertisement.
In reality most XML parsing software is regularly tested with broken XML. I do it all the time without even trying. A typo here, a misplaced character there, some broken encoding, whatever. And what happens? I get a message telling me that my XML is broken. Just like it should. Now, if the application using the library is too stupid to realise that something is broken and chugs on regardless then bad things might happen, or if the application lets the library stop the program (very unusual in my experience) then we might have a denial of service attack against the application.
Many applications using XML do so with XML that is completely under the control of the software or the local user, so there isn't likely to be any direct threat. It's only the applications that process XML from untrusted sources that are at risk.
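That clean failure mode - the parser telling you the input is broken rather than carrying on - is easy to demonstrate; a minimal sketch using Python's standard-library ElementTree parser (the XML sample is made up):

```python
import xml.etree.ElementTree as ET

broken = "<root><item>unclosed"   # a typo-style truncation

try:
    ET.fromstring(broken)
except ET.ParseError as err:
    # A well-behaved library stops here and reports that the input is
    # malformed; it's the calling application's job to handle that
    # sensibly rather than chug on regardless.
    print("rejected:", err)
```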
Maybe not everyone is doomed after all.
Roll on DNSSEC-aware resolvers, and the fraudulent DNS entries for non-existent domains will result in a local error on the client machine.
Rather than seeing the "Domain Helper" service, users will just see a warning that someone upstream is fraudulently altering their traffic. They will then move to a different ISP to avoid the warning. Eventually the ISPs will work it out or die. Easy.
I was hurriedly removing this from a friend's dialup computer and took the opportunity to trace the network traffic while connected to my broadband connection.
What worries me is that it uses 'Cache-Control: no-cache' on its requests. This means they are also causing proxy servers to do more work downloading content. OK, not everyone has a proxy on their home network but I notice that my ISP has a transparent proxy and it must be wrecking their links.
Actually, I saw a paper a while back explaining why IPv6 addresses would run out much sooner than expected. I forget the details but my understanding was that it was caused by stupid administrative practices.
By convention, the bottom 64 bits are made up from a slightly modified version of the MAC address of the network interface, so every network is automatically provisioned to be able to have every network device in the whole world connected to it at once. This is possibly overkill.
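For the curious, that "slightly modified" MAC mapping is the modified EUI-64 scheme from RFC 4291: split the 48-bit MAC, wedge 0xFFFE into the middle, and flip the universal/local bit. A rough Python sketch (the MAC address is just an example):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the 64-bit IPv6 interface identifier from a 48-bit MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert 0xFFFE
    groups = ["%02x%02x" % (full[i], full[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # → 021a:2bff:fe3c:4d5e
```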
ISPs would give out /48 addresses so you can do your own subnetting (16 bits, 65536 subnets - should be enough, even for me). We are now down to 2**48 possible connections to ISPs.
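The subnet arithmetic above, as a quick sanity check:

```python
# A /48 allocation leaves 128 - 48 = 80 bits; with 64 bits reserved for
# the interface identifier, 80 - 64 = 16 bits remain for subnetting.
prefix_len = 48
interface_id_bits = 64
subnet_bits = 128 - prefix_len - interface_id_bits
print(2 ** subnet_bits)  # → 65536 subnets per /48
```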
The addresses available to an ISP are part of an allocation sold to them by their upstream providers, and so on up the pole. Everyone in the chain needs a sufficiently large allocation of subnets that they won't run out any time in the future.
I think that this sort of thinking is very similar to the old 'give everyone an A class address so everyone will have lots of flexibility' thinking from the dawn of the internet. We all know the mess that caused when more than 125 companies wanted to play.
IPv6 was never designed to have 2**128 devices connected. The fact that it has 128 bit addresses leads some people to draw the wrong conclusions.
I run IPv6 at home with no thanks to my ISP or router vendor. The only advantages at this stage seem to be the swimming turtle at www.kame.net and learning about something that everyone else will be learning in a hurry in a few years time.
I could have misread it but doesn't "first to file" mean we will get a lot more patents for things that are blindingly obvious and in common use just because no one has tried to patent them before? Has breathing been patented or will someone (having read my post) be "first" to file?
It seems to me this only benefits the big companies that can generate patents every time someone on their payroll has an idea. The rest of us lose out because we don't have the budget to get patents for everything we do - to date we have believed that prior art protected our use of our ideas from subsequent patent applications.
I tried it but it crashes. It looks like it can't handle our local proxy setup (configured through a proxy script and then authenticated with NTLM). The only workaround seems to be to not load any web pages - not really a viable option for a web browser. You can't turn off the proxy settings (as someone pointed out, it just uses IE's settings, and my settings at work are locked down by group policy) so I can't even test it on local content. I think the most remarkable thing about this is the complete lack of a feedback channel for me to point this out to Apple. I'm happy to regard it as a beta and send back feedback, but it seems odd to only want feedback from people for whom it works properly.
Anyway, it's obvious that this is being rushed out because it (or the WebKit component) forms some key component in the new version of iTunes for Vista, so they need to get most of it working on Windows anyway.