Re: This God geezer, he's been a busy lad
Paaah, a real god just sets the initial parameters of the universe, and lets it evolve to include what he wants in it :-) .
To be fair, James May (and the rest of Top Gear) went a step further, and actually built a real life rocket and launched it. That was my favourite Top Gear Episode.
I still wish they would have another go at it; they came really close. Although perhaps not based on the Space Shuttle launcher (which is not the best rocket design, due to the Shuttle having to hang off its back rather than sit on top like other payloads).
An investigation into running a process in a container vs running it directly as a user on the system found that running it directly results in even lower power consumption.
It makes sense, because adding layers of abstraction reduces computational efficiency (more CPU cycles go to the system, vs the computation you want). It is the same reason some people forgo the OS and program the hardware directly, or even develop their own hardware (e.g. FPGAs).
The question is whether the loss in computational efficiency is worth the benefits in management and automatic scaling out of resources. If you lose 20% of a node's efficiency to virtualisation, but in return make it trivial to scale out to multiple nodes, then for some it is worth it (generally, compute/electric power is cheaper than a person's time to manage it).
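As a toy illustration of that trade-off (all the numbers here are made up for the example, not measurements):

```python
import math

def nodes_needed(total_work_units, node_capacity, overhead_fraction):
    """Nodes required when each node loses `overhead_fraction` of its
    capacity to the virtualisation/container layer."""
    effective = node_capacity * (1.0 - overhead_fraction)
    return math.ceil(total_work_units / effective)

# 1000 units of work, 100 units per node:
bare_metal = nodes_needed(1000, 100, 0.0)   # no abstraction overhead
virtualised = nodes_needed(1000, 100, 0.2)  # 20% lost to the platform

print(bare_metal, virtualised)  # 10 13
```

Three extra nodes' worth of electricity vs the admin time saved by easy scale-out is exactly the kind of sum those research numbers let you do.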
What the research does here is give some numbers to the options, so people can actually work out whether it makes sense for them to go one way or another. It is useful to those who have to sit down and architect large-scale infrastructure.
I don't think the US would change at the behest of anyone external, including the EU. I would suspect more threats/arm-twisting/etc... until the EU concedes or waters down their demands.
Perhaps Facebook knows something we don't, or they are just eternally hopeful that things will somehow go back to how they were.
The cynic in me says they know something; multimillion-dollar businesses don't sit and wait on a hope. They sit and wait for the tide to turn in their favour because they have information saying it will happen.
Whether it actually does happen though, we shall see.
"why would anyone keep a zombie in the garage?"
Well, to protect the NAS from looters of course!
Well yeah, if a bunch of people open accounts but end up never using the service, it will swing the average right down to something silly, like 5GB.
I strongly suspect the German government would bail out VW. They bailed out the banks already, despite it being a bad idea economically and not really benefiting the average person (in fact it harmed them, but that is a story for another time).
I would imagine that given the choice between printing some more Euros to bail out VW, or watching as Germany's biggest employer collapses (along with a lot of newly disgruntled voters looking for someone to blame for their woes, and more people on benefits), politicians will happily spend other people's money just so the collapse doesn't happen on their watch.
I think Mini-ITX will be around for as long as it provides all the I/O connectors it does, without having everything go through USB.
I tried replacing some of my Mini-ITX systems with Raspberry Pis, but it didn't work out (specifically the file server: the early Pi just didn't have the grunt; the newer one does, but the USB kept conking out, giving I/O errors once in a while, and the wifi card goes AWOL as well).
Media centre didn't work out either, as indexing all my music and videos would cause it to run out of RAM (I have not tried the newer one; perhaps that will have the power). But for small light services, management of other computers, and X terminals/display PCs, they are really, really useful little things.
Not to mention RGB LED controllers (with presence sensing based on Bluetooth address, so depending on who enters the room it sets their lighting preferences), and a lot of embedded stuff where a microcontroller would be too fiddly or too restrictive to set up. A lovely little machine really, and a big thanks to the Pi Foundation for making it happen :-)
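For flavour, the presence-to-preference logic of such a controller only takes a few lines. The MAC addresses and colours below are placeholders; on a real Pi the address list would come from a Bluetooth scan (e.g. parsing `hcitool` output) and the chosen colour would drive PWM channels:

```python
# Hypothetical MAC -> RGB preference table (placeholder addresses).
PREFERENCES = {
    "AA:BB:CC:DD:EE:01": (255, 140, 0),  # warm orange
    "AA:BB:CC:DD:EE:02": (0, 0, 255),    # blue
}
DEFAULT = (255, 255, 255)  # plain white when nobody known is around

def colour_for(present_addresses):
    """First recognised address wins; otherwise fall back to default."""
    for addr in present_addresses:
        if addr in PREFERENCES:
            return PREFERENCES[addr]
    return DEFAULT

# In the real controller the address list comes from a Bluetooth scan,
# and the chosen tuple is written out to the LED driver.
print(colour_for(["11:22:33:44:55:66", "AA:BB:CC:DD:EE:02"]))  # (0, 0, 255)
```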
I personally find the two systems complementary, and will probably continue to use them both in tandem going forward :)
Or build community wifi networks, like was done in the 2000s in Eastern Europe and other countries (where broadband was expensive and rare at first). Modern wifi kit is pretty damn fast (a lot faster than the 11/54Mbit you'd get from 802.11b/g back then). Sure, it might not compete with fibre to the premises, but it should be decent enough for general use. Then just link the different local wifi networks with a VPN over standard internet (if people pooled together on the costs, you could get a pretty powerful pipe).
In EE people even made money off community wifi; some became wifi ISPs, and eventually moved into being normal ISPs. How it worked is: you would pay for the hardware (or buy it yourself), pay a one-off fee to be connected to a nearby wifi AP, and then you could use the network unrestricted. After that you could pick an internet gateway that resides on the network, pay a monthly fee like you normally would for internet, and just set your default route to what they tell you.
Or build an encrypted overlay over the internet, like the I2P project. So you use the commercial infra, but as it is all encrypted, it is of little use to them for spying. I suspect the response would be to shove all encrypted comms to the lowest priority, but the multitude of businesses, or people working from home, etc... on VPNs would preclude it (I would hope).
If push comes to shove, there are still options, though it is still worth trying to stop them from carrying out their plans.
If I remember my school classics correctly (it has been a long while, so apologies), the Ancient Greeks, when developing democracy, had two rules: one could not be a career politician, or a career orator (lawyer).
In court both sides had to represent themselves. One could ask someone else to represent you, but it must not be in exchange for payment of any kind, and said orator could not represent multiple people at once or in quick succession.
As for leaders, a citizen (rather than a slave) would be nominated for election. People would then vote from a list of nominees. This meant 3 core things:
1) The nominee lived in the society and was well integrated, so was relatively down to earth
2) You would have a pool of all people to nominate from, including those who are not interested in seeking power for power's sake (unlike now, where politics self-selects for those who crave the power), and
3) Said nominee would have a business or job unrelated to their stint in politics/government, plus a social standing, both of which relied on the society continuing to function as well as before, or better
You would also go back to the society after your tenure, and if you did a lousy job it would be remembered by the rest, and would reflect on your social standing and family reputation. As such, being chosen as leader was a privilege (if you did very well, it would increase your standing in society) and a duty as a citizen to do a good job. It was seen as a necessary evil, rather than a career job-for-life kind of thing, as it is now.
Of course, the system is not without its problems, and you can argue whether it would scale to today's complex, global interconnected political societies, but even back then, thousands of years ago, the developers of democracy saw the inherent danger of politicians and lawyers to the system. Food for thought :-)
Actually, thinking about it, perhaps some really really savvy hackers out there could actually make a microcode virus.
I would imagine it as a two-part system (as microcode is usually loaded at boot, and wiped at power off), where the OS is infected, loads up the exploit microcode, which then sits and stays resident, making sure any attempts at OS cleanup fail (probably by just re-infecting it every time).
Essentially the two parts will make sure the other cannot be removed, until someone uses a liveCD to break the cycle and wipe the OS (assuming you could detect it in the first place).
I admit this is just out there, because I have no idea how microcode is structured, whether you can actually write programs in there (or is it just a translation table?) and how much space you have (although you can get 16MB of CPU cache nowadays, which could fit the whole of Win 3.1's requirements with memory to spare), but it is an interesting idea nonetheless.
Plus, nowadays x86 CPUs are essentially RISC with a microcode translator on top. One advantage is that if a microcode bug is discovered (à la the FDIV bug), it doesn't render a bunch of CPUs broken. Instead the CPU manufacturer just issues a microcode update, which gets loaded into your CPU at boot, and life goes on as before.
So microcode/CPU errors are rarely brought into the spotlight as much as before; nowadays people just patch their BIOS or download the new microcode and carry on. Some may not even realise that their CPU has been updated.
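On x86 Linux the currently loaded revision is visible in `/proc/cpuinfo`; here is a small parser as a sketch (the sample text is made up for illustration):

```python
def microcode_revision(cpuinfo_text):
    """Return the value of the first `microcode` field, or None."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            return line.split(":", 1)[1].strip()
    return None

# Made-up sample in /proc/cpuinfo format:
sample = "processor\t: 0\nmicrocode\t: 0xb4\nmodel name\t: Example CPU"
print(microcode_revision(sample))  # 0xb4

# On a live x86 Linux box:
# microcode_revision(open("/proc/cpuinfo").read())
```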
"Cars also kill birds and bats. Would you suggest taking up a bicycle as an alternative?"
I wouldn't; bicycles kill birds too*. In fact, given a stupid/ill enough bird, any moving object will kill it. There is a difference between a moving object (which healthy birds/bats/etc... will notice and try to avoid) and something like a wind turbine that causes a partial vacuum to form in its wake, causing lung damage to said birds/bats. They can't see that, so can't avoid it.
Saying that, I think that is more a problem with the massive turbines and solar power stations. On a small, distributed setup, these things are less of a problem. Perhaps the idea then is lots of smaller, local power stations, rather than a few honking great massive ones? With base load supplied by nuclear, ideally (in my world at least).
*Hit a bird in the face once on a downhill ride. Thankfully I had a helmet with a visor, but the bird didn't fare too well.
I keep thinking the best idea is to build more nuclear power plants in the contaminated zone.
I mean, the area is already contaminated, so it can't get much worse; you have all this idle land devoid of population, and (most likely) a country hungry for as much cheap, clean energy as possible (including neighbours willing to buy it off you if the price is right). As there was a power plant there before, you already have the infrastructure in place for power transmission (although it will most likely require a refurb), and no NIMBYs to protest and strangle the construction to the point where it becomes prohibitively expensive.
Better than building a nuclear power plant in another part of the country that is not contaminated, where even a minor radiation leak would register and cause problems.
I had that happen to a 286 motherboard that I kept in storage (it was new in box, only powered up upon purchase and kept as a spare for a production system). Powered it up a couple of years ago to find the BIOS chip was corrupted, and it could not finish POST.
Still looked brand new from factory though.
I would imagine that even if the apple was in perfect working order, it would not be bootable until the BIOS equivalent was reflashed.
Talk about painting a bullseye on yourself for law enforcement. Sounds like a fall guy for someone bigger, but that is just my opinion (I cannot imagine someone with high level hacking skills would advertise himself so).
Indeed, I too have seen some crazy uptimes on machines, some in excess of 3000 days (old Sun SPARC hardware was bulletproof, I swear). The thing is, people who can build and run these systems so reliably command high prices.
To play devil's advocate for a minute: the thing about "the cloud" is that it is not only about the sharing of computing resources, but the sharing of skilled people.
Your average small firm with the local kid as a sysadmin is unlikely to manage these uptimes, nor afford the people with the skills to do it. However, if 100 such SMEs use the cloud, the cloud provider will have the cash to hire the skilled people, who will then build and maintain a system to a high reliability, and all the SMEs will benefit.
Very large firms usually have enough money on hand to hire these people directly, so they don't need the cloud services (except to reduce capex in building datacentres or buying thousands of machines).
I suspect the future will be a mixed bag of cloud and non-cloud. Yes, clouds have outages, but for an SME (especially if tech is not their primary focus) the question is "are those outages less frequent than before, when everything was done in house?". I suspect for many it is. Most firms would rather not have to bother having an IT person on hand at all, let alone run their own server room and manage all that. It is usually not their core business.
People who visit this site have the skills to build and maintain their own systems, so yeah for them the cloud does not bring much to the table (except security concerns, data leakage, and reliance on a third party), however I would say we are not the target market for this technology.
You are right that the extent of ring involvement is dependent on the OS (it is also dependent on the CPU arch, actually). Both Linux and Windows use two rings, for kernel and userspace; not sure about the others (I remember hearing that OpenBSD uses all 4 of the x86 rings, but no idea if that is true).
I will admit, I was looking at this a while ago, when I implemented RDMA over FireWire as a poor man's InfiniBand for clustering. Back then it was not possible to access the hardware from userspace without essentially writing a shim kernel module that would sit and pass the needed data between kernel and user space, thereby incurring the overheads I mentioned.
Now, that is Linux specific, however any monolithic kernel design by its nature has to have all userspace stuff go through the kernel. GNU Hurd goes to show that it is possible to have user-space device drivers without the overhead, but the kernel has to be designed for it.
The MMU is a hardware device (nowadays integrated into the CPU die) which handles memory translation at the low level. Not only is it already low latency, both the kernel and userspace use it (all the time), so there is no difference between user/kernel space in this context. The difference is that userspace goes through an additional layer, the VMM (virtual memory manager), so each process sees its own virtual address space. Only the kernel (which doesn't use the VMM) sees the real address space and, lacking the extra indirection that userspace has to traverse, has a lower latency.
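To make the translation step concrete, here is a toy single-level page table; real x86 uses multi-level tables and does the walk in hardware, and the mappings below are made up:

```python
# Toy MMU: translate virtual addresses via a single-level page table.
PAGE_SIZE = 4096  # 4 KiB pages, as on x86

# Hypothetical mapping: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        # On real hardware this raises a page fault for the kernel's
        # VMM to resolve (swap in, allocate, or kill the process).
        raise MemoryError("page fault at %#x" % virtual_addr)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # page 1 -> frame 3, offset kept: 0x3abc
```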
"Also, given the right setup (e.g. CPU isolation and pinned execution threads to reserved CPUs) there is absolutely no reason for userspace code to be slower than kernel code (also no reason to be other way around, obviously); it is about cost of context switches (also cache hotness and similar things)"
Yes, the code execution speed in kernel and userspace should be the same; after all, code is code, the CPU doesn't care. It is the transition overhead of switching between kernel/user space which slows things down, along with the switch between supervisor/user mode rings (if you are using a userspace device driver, for instance, which is why they are less performant than kernel drivers).
And yes, highly tuned systems can reduce latency, especially if, in addition to pinned execution, you also pin interrupts to certain cores (core 0, so that you don't route through the LAPICs on most hardware; but at this level, your workload type and the actual x86 motherboard you buy make a hell of a difference to latency, as they all route interrupts differently :-) )
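That transition cost is easy to glimpse even from a high-level language; a rough and very unscientific timing sketch, comparing a cheap syscall against a call that never leaves userspace (absolute numbers vary hugely by machine and kernel, so treat the output as illustrative only):

```python
import os
import time

def userspace_only():
    return 1

N = 100_000

t0 = time.perf_counter()
for _ in range(N):
    os.getppid()  # crosses into the kernel on every call
syscall_secs = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    userspace_only()  # never leaves userspace
user_secs = time.perf_counter() - t0

print("syscall loop: %.4fs, userspace loop: %.4fs" % (syscall_secs, user_secs))
```

Bear in mind that Python's own call overhead swamps the difference here; a C microbenchmark or `perf` shows the user/kernel transition far more cleanly, but it is real either way.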
I don't think you are correct.
1. A userspace driver has to go through the kernel every time it tries to access the hardware, resulting in a context switch which slows things down compared to direct kernel access
2. A userspace program accessing hardware requires the kernel to drop into (and then out of) supervisor mode each time it does so; these switches in/out of that mode add latency compared to a kernel thread, which stays in supervisor mode
3. Userspace code never accesses RAM directly; it does it via the VMM, which itself uses the MMU for translation. The kernel does not use the VMM, so in theory it is a bit faster, but the primary benefit here is being able to directly access physical memory addresses, for things like DMA.
Sure, machines may have gotten so fast that all the above is barely noticeable overhead in general use cases, but it doesn't mean said overhead doesn't exist.
I find that a distinctly odd occurrence. Historically things were put into the kernel because it was faster than userspace. So much faster that it was worth the programming difficulty, potential security holes and risk of locking up the system to do it for some things (excluding things like device drivers, which needed access to raw hardware and had to be in the kernel).
If anyone wanted performance above all else, they used to put it in the kernel. Is it really possible that Linux's network stack has become so inefficient and convoluted, that a userspace stack is actually magnitudes faster? That just sounds nuts. Admittedly the last time I had a good look at the network stack was in Linux 2.4 and early 2.6, so I might be out of date w.r.t the state of the art, but still, things in userspace being faster than kernel space just sounds wrong to me. Am I missing something?
...as the offerings are fast enough for my needs. Give me some more symmetric lines. Some 200Mbit down, 10Mbit up is really pointless, as I tend to saturate the outbound before the inbound reaches peak. Plus the ability to run my own cloud/services/etc... is really appealing.
Some are silly, like 50Mbit down, 0.9Mbit up. I even ask if they can offer something else, and they always say it isn't possible unless you want to go the leased-line route. I don't need 1:1 contention, just a more balanced down/up.
"The driver's side rear view mirror glass (just the glass oval) is over $800."
Damn... that sounds excessive. What on earth are you driving? And does that price include fitting?
The mirror glass for my cars costs in the range of $60-$250 new (the $250 is for special glass with the heaters and polarization and other fancy stuff).
I don't think I have ever seen $800 mirror glass (excluding fitting. Simple jobs can take a long time on modern cars, so I can imagine $800 if you include parts, labour and taxes).
"Until they become reasonably numerous, at which time governments will start to notice the amount of fuel duty they're losing. Thereafter electric power will be subject to a two-tier taxation regime similar to the one currently applied to diesel."
Nah, they will just start charging you by the miles driven. Works for all vehicles, regardless of method of energy delivery, and the charges can vary based on time of day and location.
Especially as the side effect is that they will have a legitimate reason for why they have to track the vehicle's every movement from when it is first registered on the roads, and for making it a criminal offence to disable tracking.
They would have a complete timestamped record of the vehicle's location, speed, etc... and eventually will probably have access to any internal microphones/cameras in the cockpit of these overly-computerised cars.
"You have no stereo vision at the range of a car behind you. "
Not technically true. People instinctively bob their head about when looking at the mirror. This has the effect of increasing the parallax, allowing people to more accurately judge distances. Most people do this without noticing, but it is something that cannot be done on a 2D screen.
It is also the trick people use with animated gifs, allowing you to perceive a 3D image in a 2D environment. (see here as an example: http://www.maddocman.com/wiggle-3d.htm Not my site, just the first on google search)
"The vision system can have distance measuring (Radar, lidar image processing) and can highlight cars in your blind spot"
So now we are replacing a cheap, simple and reliable system, with two complicated, computerized and expensive systems? And this is considered an improvement?
"If you have the screen anyway then the cost of the camera is negligble and less than the lifetime cost of fuel used by the extra drag."
Where would you put the screen? Most people look left when they want to manoeuvre left; having the screen in the centre would be worse than before. I would imagine there would need to be two screens (left and right, roughly where people now see their wing mirrors), in addition to whatever screens the rest of the car has.
"Wing mirrors also get broken and aren't replaced until the next MOT (or never over here) they are probably less reliable than cameras"
I disagree with that. I have rarely seen broken wing mirrors; most of them are really sturdy, and you need a lot of force to break them. I think less than 1% of the cars I have seen on the road had broken wing mirrors.
Not to mention a broken wing mirror is easily seen by others, so they can say "Ok, person might not be able to see me on that side of the car, better act accordingly". It acts as a visual sign. There is no way for other drivers to tell if the wing cameras are working or not, including police (who can pull you over if you have a broken wing mirror, at least here in the UK).
"Big advantage on no wing mirrors is reduced noise in the back"
From what I can see, replacing wing mirrors with cameras provides two minor advantages, while giving a boatload of disadvantages.
We are replacing a simple, reliable system with two expensive, complicated, less reliable systems, that will be:
a) more expensive to buy
b) more expensive to repair (coupled with others not being able to tell outside the car, less likely to get fixed unless it becomes an automatic MOT failure)
c) more dangerous (not only due to loss of ability to tell distance of car behind and because others can't tell if you can see them, but also because it is easier for cameras to be blinded by bright lights, or get dirty, or fail)
All for a minor gain in fuel economy and less noise in the back? I would argue that as cars get more and more complicated, they last for shorter periods of time. Pretty much the first things to go on second-hand cars are the electrics. Engine/mechanicals are usually last (unless you bought a pup that was badly treated).
As cars get more and more computerised and interconnected, they become so expensive to repair, that their lives will be shorter than the old cars. Some people already own a car only for the duration of the warranty, then sell it due to the expense (everyone complains about rip-off mechanics though, as if the job is easy and simple on new cars. It is ruddy awful working on new cars).
Cars like these will not last long, and will be scrapped and new ones built more often, becoming more like a consumer good than a durable good. This is a huge waste of energy IMO, which dwarfs whatever fuel consumption improvement you would gain by getting rid of the wing mirrors (especially as the drag can be reduced by smart aerodynamics; I seem to remember reading that some sports car actually had its wing mirrors improve downforce).
Really, complexity breeds problems and failures. It takes a lot of thinking and effort to make something simple, elegant and functional, along with an understanding of the law of diminishing returns when it comes to the application of technology.
"The Model X concept cars had cameras instead of door mirrors. This is a very sensible move in terms of fuel economy but unfortunately illegal as the world's car-industry regulation hasn’t caught up with technology."
My understanding is that it has less to do with slow regulation, than to do with the fact that rear cameras are just not as good as mirrors. Specifically you lose the 3D cues humans are used to that allow us to judge distances. Seeing a 2D representation on a screen of what is behind you will never be as good as a mirror (unless they develop a 3D display and camera set up, which will probably be a lot more expensive than a mirror).
Plus you would have to keep the displays running all the time so that you can do the "mirror-signal-manoeuvre", and there is a lot more that can go wrong with the system compared to a mirror (whose only real failure mode is the glass getting smashed).
Hence, despite the fact that the tech is old(*), I don't think it will replace mirrors soon. After all, it hasn't replaced mirrors in any other modern cars that I know of, especially as this is tech that can be retrofitted to ICE vehicles, and even there improvements in fuel economy are appreciated.
(*) I remember seeing some tuning houses in the 80s demoing rear view cameras with fat CRT screens in the dash instead of mirrors, not to mention that the late-90s and early-2000s "pimp my Vauxhall Astra" scene was full of LCDs and rear view cameras as well.
I fully agree with you in the long term (the march of entropy and all that).
On a shorter term though, I would say LED lighting is an example of a really low-efficiency heater. You would only get ~10% of the input energy out as heat :-)
Full win95 interface for Linux? Like fvwm95?
It was full enough for me to switch over non-techie users from win95 to Linux back in the day. I even used it on my old PDA as well.
Might actually install it now on my work desktop, just to go totally retro in the office (I might still have some of the old win95/98 wallpapers somewhere as well)
"Solar panels would have to be transported by vehicle as well, you know. "
Yes, but only once. You need to keep sending fuel (whether it be hydrogen or diesel) to these remote places, most likely by vehicle of some sort. Yes, you will need to have someone come round and clean the panels, but that is less often than having to restock the fuel, and it can probably be done as part of the general maintenance already being carried out on the installed equipment.
"The nice thing with hydrogen is that you can ALSO produce it cleanly if you wish. It's not a fuel SOURCE, it's an energy STORE."
Same thing applies to liquid fuel, with the bonus of being easier to store and transport. Liquid fuels like alcohol, petrol, diesel, etc... are essentially hydrogen fuel with carbon atoms binding it, hence the term "hydro-carbon" :)
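To put rough numbers on the storage point (the figures below are ballpark heating values from memory, so treat them as approximate):

```python
# Approximate energy densities: (MJ per kg, MJ per litre).
# Ballpark figures only; exact values vary by source and conditions.
fuels = {
    "hydrogen (700 bar)": (120.0, 4.8),
    "petrol":             (44.0, 32.0),
    "diesel":             (45.0, 36.0),
    "ethanol":            (27.0, 21.0),
}

best_by_mass = max(fuels, key=lambda f: fuels[f][0])
best_by_volume = max(fuels, key=lambda f: fuels[f][1])

# Hydrogen wins by mass, but the liquid hydrocarbons carry several
# times more energy per litre of tank, which is why they are so much
# easier to store and move around.
print(best_by_mass, "|", best_by_volume)  # hydrogen (700 bar) | diesel
```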
Indeed, many early engines ran on renewable fuel: the first internal combustion engines ran on coal gas, but Otto's 4-stroke was alcohol-fuelled, while Diesel's engine ran on veg oil. The only reason we use fossil fuels is because it is cheaper to dig them out of the ground than it is to produce fuel cleanly.
I remember reading somewhere that the raw cost of refined petrol (minus all the taxes, and profits, etc...) would have to hit 95pence/L before it becomes viable to mass produce it(*) cleanly rather than dig it out. Currently the raw cost of pumping and refining oil into petrol is about 2-3pence/L.
(*) by "it" I mean a clean renewable fuel that can burn in petrol engines. It can be a 1-1 replacement like Butanol, or Alcohol fuel (which would need modifications to existing engines).
Well, I see I was wrong in how this all works, so I've withdrawn my original comment. Thanks a lot guys, your posts were all really insightful for me. Upvotes all round! :)
How would they know that you connected it to an aerial then? I mean, they get your details from the retailer, and you just show/tell them you didn't connect it up. What stops you just connecting it up once they leave? It isn't like they can monitor the back of the TV 24/7. If that is true, it seems like a really massive loophole in the whole thing.
I only know because years ago, when I decided to ditch watching TV, I had to deal with the TV licensing guys. Their argument was that because I bought a "TV" that had a receiver (i.e. the "capability to receive broadcast TV"), I intended to use it to receive broadcast TV and had to pay the TV licence.
I guess it makes sense, otherwise anybody could just buy a TV, not pay the licence by claiming "Look! I don't intend to watch broadcast TV. I did not hook it up to an antenna" and then just plug it in when the TV licensing guys go away.
In the end, I took a Dremel to my TV's tuner to physically turn it into a dumb monitor (it has spent its entire time hooked up to my PC, or my XBMC setup), which satisfied them, and since then they have not bothered me.
I mean, yes, I could go to court and argue about what I want to do, why I don't need to pay the licence, and the precise meaning of "Intent" in this context, but it was simpler, faster, less stressful and cheaper to just disable the tuner in my TV.
Although I seem to remember them thinking of widening the licensing scope to include any internet-connected computer capable of watching live streaming on iPlayer. Not sure if that will get through though; we will see.
I guess it depends on who you get from the TV licensing people, and how anal they are. At least this way they can't come back later and say I lied to them. The TV is no longer a TV according to their own definition.
IANAL and all that, just saying what I had to do to get them off my back :-)
Indeed, I've never heard a hypothesis stating that the body which collided with Earth to form the Moon came from "far away". In fact, the general assumption was that Theia formed very near to Earth's orbit, hence the two bodies' orbits being perturbed enough for them to eventually collide.
I remember many years ago watching a BBC programme (Horizon?) that said pretty much this, including stating that moon rocks collected on the Apollo missions pretty much confirmed it. As such, I assumed that "a sister planet collided with Earth to form the Moon" has been the prevailing theory for a while now.
While more evidence is always great in science, I don't think this is as game changing as the article makes it out to be.
I am an ex-ADSL24 customer. After Coms bought them out it all went to pot. For months the broadband was pitifully slow (we are talking slower than a 56k modem), with constant drop-outs and 2 major outages as well. Fat lot of good it did having an unlimited broadband package when you could never actually use the broadband.
Every time, they put the blame on me/my equipment, despite said equipment working for ages with ADSL24 with no problems. Honestly, Coms were as bad as TalkTalk/Sky from a support perspective. I got the feeling they wanted to get rid of me, based on their behaviour.
Finally moved over to Andrews & Arnold, who are miles better than anyone else I've had (although getting used to the 100GB cap is proving tricky). Coms still send me demands for payment for services despite me cancelling them months ago; presumably there is a disconnect between accounting and their broadband/telephone guys.
Still, I too am unsurprised that they are losing money; they sure are not going to get anyone to willingly pay them, based on my experience of their behaviour.
So girlfriend sends nudie pics to boyfriend, these guys break into the Gmail account, pilfer said pics, and post them on a "revenge porn" site. A site that touted that all their pics etc... were of authentic girls, uploaded by their exes as revenge for cheating/being dumped/etc...
If I read this correctly, you could have women accusing their boyfriends of posting their private pics on a revenge porn site, as (in theory) apart from the creator, only one other person had them.
How many relationships did these two tits ruin I wonder? How many people did they cause emotional distress to? What a despicable act to be involved in. Cunts...
Yes, Jaguar made it clear they will be providing a manual option.
If I ever have the money (I can dream), I'll get one of these. I love the look, and the main reason I never considered a XK8/R was the lack of manual option.
Well, seeing as what we call "nuclear waste" is in fact nuclear fuel that we couldn't burn in the reactor (hint: if it is radioactive, it still has energy in it), it is perfectly plausible for our nuclear waste to be usable as fuel in more advanced nuclear reactors.
The technology exists already. From what I remember, of the nations that have the technological capability, only the US does not reprocess spent fuel, but rather just dumps it into long-term storage.
http://en.wikipedia.org/wiki/Breeder_reactor#Fuel_efficiency_and_Types_of_Nuclear_Waste has some info on reactors which can make use of nuclear waste to generate more heat.
You're welcome :)
Well, like how the internet was before, the phrase "seek and ye shall find" comes into play. If you dig hard enough, you will find all sorts of scum and villainy there, or on another free network, or even on the internet.
Part of freedom is the fact that it is free for everyone, even those you disapprove of/dislike.
On the other hand, you are in control: you don't have to interact with the wider I2P population, for example. You can set up a few nodes and exchange between friends, sort of like your own private bubble within I2P, just like you can avoid/block certain IPs on the internet.
Also, as more and more people use it, the percentage of the network that is nasty will go down. The "scum and villainy" tend to be early adopters of new secure tech, but they are a minority in the world, after all.
Well, they are. The best I've seen so far is I2P. It is essentially an encrypted overlay over the internet (sort of like a global VPN) upon which you can implement servers. They already have servers for IM/file sharing and the web. Unlike Tor, it is fully encrypted throughout (with public keys), and is not connected to the rest of the internet, nor is it supposed to be (although I'm sure you can make a gateway to do so).
The wiki page explains it better than I can: https://en.wikipedia.org/wiki/I2P
Or look at their official page if you want to get involved: https://geti2p.net/
Yeah, "accidentally". Not like the sysadmin just won the jackpot and will no longer have to go back the next day to babysit old decrepit machines, eh? :-)
( I know which machine I would do it to in the DC, if I ever won a million )
.... would harp on about during my debates with them about the (imo) stupidity of filling cars chock full of electronics and wireless/keyless entry, is that yes, hackers can break into them, but how many petty criminals are also top-notch computer security guys?
I keep pointing out that while that is true, the petty crims don't need to be hackers. They just need one person to work out how to break into a car and then sell a little dongle (or, I'd imagine in these modern times, a mobile app) that does the magic. Just like the problem with piracy: the moment somebody somewhere gets in, it can be distributed high and wide really fast and at little cost, and every dumb tit who can run an app can break into a car.
For a non-connected car, your petty crim has to have some knowledge of breaking in (that doesn't involve triggering the alarm by smashing the window), and their skill varies. Some are good, others just smash and grab. Either way, it usually involves them looking suspicious next to a car for about 5 minutes and drawing attention to themselves. Far more suspicious than somebody running an app near the car, unlocking it, and then just walking in like they own it.
Apart from the population of El Reg, most people haven't quite cottoned on to the situation, it seems. Maybe, like a previous poster said, insurance companies should hike premiums for these cars in order to send a message.
So, 2000 GMT?
While I would imagine that the UK weather will do what it does best, at least thanks to the net we can now observe it from somewhere you can actually see it, without the clouds, smog and light pollution.
And that it was the Serbs, not the Bosnians, who shot it down:
Although I remember there being a rumour that a B-2 was hit as well, but was able to fly back for repairs.
The F-117 was shot down by an S-125 SAM, not exactly hot new technology.
Apparently US stealth planes were designed to be invisible only in a narrow band of frequencies, the ones most used by modern radar systems. Old WWII/Cold War radars used lower frequencies, which were less accurate but could see the planes no problem.
<quote>they would gladly sell us air if they could work out a way of doing it.</quote>
Don't stop holding your breath just yet. Take a look at carbon credits. They don't sell you the air you breathe in, but they will charge you for the CO2 you expel out ;)
</joke, for the moment at least. Humans expelling CO2 are not part of the carbon charge, but who knows what the future holds. They just need to get their foot in the door first...>
https://www.tribler.org/ seems to be heading in the direction you want. From the site:
"Tribler is the first client which continuously improves upon the aging BitTorrent protocol from 2001 and addresses its flaws. We expanded it with, amongst others, streaming from magnet links, keyword search for content, channels and reputation-management. All these features are implemented in a completely distributed manner, not relying on any centralized component. Still, Tribler manages to remain fully backwards compatible with BitTorrent."
So, like the P2P of old (Gnutella/eDonkey/etc...), but with BitTorrent underneath. Apparently it also uses its own Tor-like service for anonymity.
What, did Bitcoin become too mainstream for them?
If you are going to go the route of currencies that are not state-backed, why go with one that is run by a single company, which can control how much currency is in circulation by twiddling some numbers?
Even if they wanted to limit the area to just Brixton, they could tag Bitcoins and only accept coins with the tag, or start their own blockchain and create an alt-currency.
This setup just seems real fishy to me.
Perhaps they get somebody on the inside to plant backdoors? It could mean anything from breaking in to install custom hardware that gives them access (e.g. reverse SSH), to bribing a janitor to plug in a wifi AP somewhere in the building.
Still, I'm not sure you can class anyone as an "IT specialist" if they get so badly owned that the miscreants can watch them on their own webcams. Sysadmins especially should be security-conscious, and all the ones I know (myself included) have paranoid tendencies.
There was a guy who made a microcontroller-based Mifare reader/writer, which could emulate the Oyster cards used on London transport. In addition to reading and writing the contents of an Oyster card, he could clone the cards of others and then use those accounts for travel.
He never released the code and specs of what he did, but I remember there being a bit of a ruckus a couple of years later about a large number of fraudulent Oyster top-ups popping up, with TfL making changes to the system (Oyster is Mifare Classic, from what I remember).
Presumably they now don't store the balance on the card, but some sort of ID which is linked to a central account. It is still possible to clone cards, but not to just issue "free" top-ups.
It was deemed too expensive to rip out every single Oyster-enabled device and replace it with a newer system, so I suspect that the cloning loophole is still viable for those with the time and inclination for it.
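The design shift described above — moving the balance off the card and into a central account — can be sketched roughly like this. This is a hypothetical Python illustration of the two ticketing designs, not TfL's actual system; all class and field names are made up:

```python
# Hypothetical sketch of two ticketing designs (not TfL's real code).

# Stored-value design: the balance lives on the card itself.
# Anyone who can rewrite the card's memory can "top up" for free.
class StoredValueCard:
    def __init__(self, balance):
        self.balance = balance  # attacker-writable field on the card

    def charge(self, fare):
        if self.balance < fare:
            return False
        self.balance -= fare
        return True


# Account-backed design: the card carries only an opaque ID;
# the balance is held and debited centrally.
class AccountBackedSystem:
    def __init__(self):
        self.accounts = {}  # card_id -> balance, held server-side

    def top_up(self, card_id, amount):
        self.accounts[card_id] = self.accounts.get(card_id, 0) + amount

    def charge(self, card_id, fare):
        # A cloned card presents the same ID, so it spends the same
        # account's money -- but it cannot invent new balance.
        if self.accounts.get(card_id, 0) < fare:
            return False
        self.accounts[card_id] -= fare
        return True
```

Cloning still works against the second design (two cards with the same ID just drain one account), but rewriting a number on the card in your pocket no longer conjures money, which matches the change described above.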
Or perhaps, its purpose is to distract the world with a shiny, randomly moving object, while the real military satellite goes off somewhere else quietly?
People have wondered the same about the USA's "secret" spaceplane as well. Real secrets are not so easily made public, discovered and tracked.