Re: For those interested in servery type projects
FreeBSD has ZFS
But the Pi simply hasn't got the memory for it to make sense. IME ZFS on FreeBSD is temperamental even in 4GB; it really wants 8GB+ even for small filesystems.
Please don't send the audio from the browser to the phone for processing and verification. This just leaves a hole attackers can look to use. The server should receive both audio streams and check - even if it is just a simple hash of the stream to save bandwidth - but don't leave it to the phone to confirm.
That strikes me as a very deliberate decision and one that I would agree with - done correctly (i.e. public key encryption that can only be decrypted by the phone) it means the service provider never has access to the audio. That gives the user a good assurance of confidentiality and eliminates the attraction of a single server being able to access everyone's audio. Of course, it does depend on the phone not being compromised but in that eventuality all bets are off anyway.
As for hashing, forget it straight away. This kind of DSP work always needs proper samples to work with; put simply, too much processing is needed to match the samples up. The two recordings are never going to be exactly synchronised, for example, levels are going to need adjusting, and a certain amount of tolerance needs to be built in to allow for different locations or the characteristics of the microphones used.
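To make the synchronisation point concrete, here is a minimal sketch (my own illustration, nothing to do with the actual product) of estimating the offset between two recordings via cross-correlation - note that even this toy version has to normalise levels before the peak means anything:

```python
import numpy as np

def estimate_lag(ref, other):
    """Estimate the sample offset of `other` relative to `ref`
    via cross-correlation (toy illustration only)."""
    # Normalise levels so differing microphone gains don't dominate
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    other = (other - other.mean()) / (other.std() + 1e-12)
    corr = np.correlate(other, ref, mode="full")
    # The index of the peak gives the most likely lag in samples
    return int(np.argmax(corr)) - (len(ref) - 1)

# Toy demo: the second copy starts 5 samples later and is quieter
rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)
delayed = 0.5 * np.concatenate([np.zeros(5), signal])[:1000]
print(estimate_lag(signal, delayed))  # → 5
```

Even that is only the easy part - real recordings also need resampling, tolerance windows and so on, which is exactly why a bare hash comparison is a non-starter.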
The one potential showstopper I see is where security is actually a real concern where you may think something like this would be attractive. At my employer for example possession of a mobile phone on an operations floor is an instant sacking offence - they are that concerned about any recording devices, whether audio or visual.
Much as I'm a fan of Unix generally, this is just bollocks. The chances are that if a server hasn't already been migrated to something else it is doing something rather more than file and print. There simply isn't a point-for-point feature equivalence for anything but the most basic of functionality, let alone a drop-in replacement.
Two that come to mind right away: Ubuntu lacks AD integration in any reasonable meaning of the term beyond basic user management, and of course it doesn't natively run Windows applications. Those are not niche features of interest only to tiny audiences but instant showstoppers for a large proportion of installations.
Shouldn't degaussing be sufficient? Or maybe that was too boring.
Probably. The problem with things like this is that there's generally too much "knowledge" around that is of purely historical value. A lot of the stories that get cited refer to e.g. floppies or low density hard drives - you can forget about them entirely for modern drives.
As the density goes up what it takes to make the data completely irretrievable goes through the floor: e.g. if you physically overwrite a sector once what was on it before is lost forever - those algorithms you have read about involving multiple passes and random data belong to a different age. Significant damage anywhere on a platter essentially makes the entirety unreadable - it doesn't matter if most of the data is still perfectly intact if there is no way it can be subsequently read out.
The fact that some of the methods tested are not very exciting does not mean they are not completely effective. Hell, I wouldn't want to count on it, but I'd imagine simply taking the top cover off outside of a clean room environment would counter even the most sophisticated attacks a good proportion of the time.
The Ken Thompson compiler hack ... means that the only code you can REALLY trust is that which you have compiled yourself, by hand, into assembly language, and then laid down byte-by-byte into memory.
It is altogether too easy to overestimate the impact of that particular demonstration: it wasn't really a practical hack or even a real proof of concept but more an illustration of a possibility.
Thompson's code worked against a specific login source tree and a specific compiler source. Generalising it to be resilient to continued development of either is hard and increases the scope for detection; after all, if you want the hack to be cross-architecture it needs to be inserted at the parse tree or possibly token stream level. Anyone working on those or later stages of the compiler would soon notice inexplicable entries in the internal data structures in their debugger.
That's without even considering the level of semantic analysis required to hack a tool that has not yet been written. That's decades ahead of the state of the art: we can say with confidence such technology simply doesn't exist.
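To illustrate just how specific Thompson's demonstration was, here is a toy sketch (purely my own invention, in Python rather than the original C, with made-up function names) of the pattern-matching approach - note how trivially it is defeated by an innocent reformatting of the target:

```python
def evil_compile(source):
    """Toy illustration (NOT Thompson's actual code): a compiler
    pass that recognises one specific login routine by its literal
    text and swaps in a backdoored version."""
    trigger = "return pw == lookup(user)"
    backdoor = 'return pw == lookup(user) or pw == "joshua"'
    # Matching on literal source means the hack silently stops
    # working the moment the login code is reformatted or renamed.
    return source.replace(trigger, backdoor)

login_v1 = "def check(user, pw):\n    return pw == lookup(user)\n"
# Same logic, trivially reformatted:
login_v2 = "def check(user, pw):\n    return lookup(user) == pw\n"

print("joshua" in evil_compile(login_v1))  # → True: backdoor inserted
print("joshua" in evil_compile(login_v2))  # → False: hack silently defeated
```

Surviving ordinary churn in both the login code and the compiler itself is where the real difficulty lies, which is the point being made above.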
Who cares? It works and behaves (to me, the user) pretty much exactly as it did years ago. So for all intents and purposes, my desktop has "not changed" in so far as it looks pretty much identical, acts identical (from the user perspective) and behaves the way I want.
Who cares? You do. It was you that advanced the claimed advantage that any old WM from twenty years ago works fine on a current system. They don't. Now it supposedly doesn't matter because you have found a modern WM that happens to keep you happy.
That is a massive volte-face, not a justification of your position. If your preferred WM was twm, mwm, olwm or any of countless others you are shit out of luck.
Many apps simply don't work correctly on a strictly ICCCM-compliant WM.
Let's see… I'm typing this in Firefox 37 running within FVWM 2.6.5 on X.org server 1.16.4 and Linux kernel 4.0.2. I also use Gnumeric or LibreOffice for the office suite just fine and numerous other applications such as The Gimp, Inkscape, and of course gVim.
So, version 2.6.5. It adopted EWMH from version 2.5 onwards, and thus no longer pure ICCCM. Now ask yourself why they had to deviate from the prior and well established conventions.
Want an early 90s desktop with modern applications? No problems: install one of dozens of window managers, set up .xinitrc and it's just like the old days.
Have you actually tried it recently? A lot of classic Unix apps have been royally shat on by the Linux community who seemingly show blatant disregard for anyone using anything else. Many apps simply don't work correctly on a strictly ICCCM-compliant WM. Examples that come to mind - both Open- and LibreOffice will tend to dump core (and keep open documents locked) if you have the temerity to close the app via such a window manager. Firefox can't even maximise properly on some systems - it gets bigger all right, it just ends up four times the size of the screen.
Oh, and you still have most of KDE or GNOME running (and probably both) - the needlessly chunky and redundant libraries anyway, if not the small veneer on top. Yes, I just love pissing away 200MB of memory so that some anonymous dev can express his opinion on what an OK button should look like, even though it matches nothing else on the system.
That's without even getting me started on the desktop-oriented random distribution of detritus that such apps bring. It appears to me that if I write a document on project Example then sticking it in ~/example would be eminently sensible. If I download e.g. a datasheet in support of it then equally I may want it in ~/example/data. It's good to know that I'm utterly mistaken in that and should naturally follow the Windows 3.x practice where a document's location is based on where it came from rather than what it is - the "correct" locations are obviously ~/Documents and ~/Downloads respectively.
To drive the point home, bonus points for re-creating those directories each and every time the program runs even if nothing ever gets saved there. "Something shiny" that looks pretty is obviously much more important than the elimination of pointless distractions when actually trying to do some work...
Before reading it I had absolutely no idea what plain white flour, lard, sour cream, salt and baking powder looked like. Now I know.
...co-opt users as a free workforce.
I wonder what completely pointless and utterly worthless metric they're going to come up with to persuade the suckers users that their contributions are really highly valued?
Huawei seem to take that sort of issue much more seriously than their Western competition, possibly because of similar concerns in their home market. I bought a cheap Huawei phone a couple of months back and the privacy shrinkwrap was a breath of fresh air compared to what tends to get shoved down our throats these days.
The scariest terms were of the extent of "If you go to our website and buy something then we'll hold your card details for as long as needed to process your payment". Nothing about tracking whatever you do or finding new ways to profile and monetize you - clauses that appear in the terms of seemingly everyone else these days. The whole thing was actually quite reassuring in comparison.
As for their commercial gear - well, don't forget it has been through and passed GCHQ code audits. Far from being untrustworthy, they give every indication of doing this how you would want them to, and in a way that is far more respectful of your privacy than most companies.
When did anybody say that, ever?
Plenty of times - remember the quid pro quo - "things in the overall package that could be negotiated and perhaps resulting in a lower salary." You may be willing to trade 2% of salary for another week's annual leave, or even, as in my case a few years ago, 40% of salary for only working three days a week. When they asked why, I explained I wanted to do my doctorate. They jumped at the chance.
Wikipedia sub-title sums it up: "Earth's internal heat and other small effects"
The figures don't really matter - this situation can be reasoned about without even needing any quantifiable data. The long term trend has to be either to a dynamic equilibrium or for net ice build up. We know this by the simple fact that the ice sheets are there and haven't already melted away and indeed have built up over time - they weren't always there after all. Therefore natural ice loss must on average be at least matched by new ice formation.
However we can see a long term trend to less ice so something has changed. The amount of geothermal heating certainly does but that is a slow steady decline, not an increase.
Unless of course you're talking about benefit fraud in which case current thinking is hanging's too good for them.
This time it isn't really a case of one rule for one group and another rule for others: the punishment for benefits cheats is similarly lenient. Time and again people receive tens of thousands as a result of a false claim and get punished with less than a hundred hours' community service.
They have to repay the money of course but because in many cases the fraudster is legitimately on some form of benefit the courts won't order repayment at a rate of more than £5 a week. It's not unheard of that people would need to live to over 200 to repay the debt.
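The arithmetic behind that "over 200" figure is easy to sanity-check (the debt below is an invented round number, not from any real case):

```python
# Rough check of the repayment horizon at the court-capped rate
debt_pounds = 52_000   # hypothetical overpayment
weekly_rate = 5        # £5 a week repayment cap
years = debt_pounds / weekly_rate / 52
print(years)  # → 200.0
```

So anything much above £50,000 at £5 a week is effectively unrepayable within a lifetime.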
I'm impressed - From New Horizons' vantage point, Pluto would have appeared to be around 1/30th the diameter of our Moon (as seen from Earth) at the time that snap was taken.
I wouldn't get too carried away - it's still over twice the apparent size of Mars even at the most favourable opposition. To put it into context I've seen more impressive pictures of Mars taken by amateurs - advanced amateurs with perhaps £4000 of equipment, but still amateurs - even with all the atmospheric distortion involved in observation from Earth.
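A back-of-envelope check bears that out - all the figures below are rough textbook values I've plugged in myself, not anything from the article:

```python
# Approximate apparent diameters in arcseconds
moon_arcsec = 31 * 60            # Moon from Earth, ~31 arcminutes
pluto_arcsec = moon_arcsec / 30  # 1/30 of the Moon, per the quote
mars_arcsec = 25.1               # Mars at a perihelic opposition
print(round(pluto_arcsec / mars_arcsec, 1))  # → 2.5
```

Call it roughly two and a half times the best-case apparent size of Mars, hence "over twice".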
* 100Mbit and 600-800MHz 32bit in-order CPU (Crusoe, early Via Eden): barely usable. User experience is awful, you can see it redrawing, it stalls, etc.
I've used similar spec terminals and come to precisely the opposite conclusion - for most business apps you can't tell the difference if everything is properly configured and you are using high level protocols (X11, RDP) rather than bitmap kludges such as VNC. Hell, I remember a few years back I tried full screen movie playback to see how well it fared, software rendering on the remote machine rather than locally. In fact it coped surprisingly well - no, I'm not going to claim it was silky smooth, because the frame rate was well down and became distinctly cine-filmish in panning shots - but it was watchable.
The only issue I ever observed in real use was with Firefox on X11, and then only fairly rarely - it seems the rendering code does pass large images directly over to the X server for it to scale before display. If you have a particularly graphically intensive page or a single large bitmap things would slow to a crawl - displaying e.g. an 80MP image was not fun. However, how frequent is that for most of the target market? The fact it isn't much good for viewing high res porn is probably a plus.
I can only assume that this is either completely groundless FUD or you are not capable of configuring this kind of thing correctly. Either way your comments are completely divorced from reality.
Right now the offset between BST and UTC is 3600 seconds. That offset should change to 3599 seconds, and UTC should continue unchanged. The TZDATA files will get bigger, but not much, since all countries should implement the leap second at the same time.
Careful, this is a simplification of a simplification. Many of these amateur suggestions are implicitly based on layers of simplification, knowing or unknowing, which introduce paradoxical effects at the margins. Doing the job properly is complex, which is why you have a panel of experts spending so long debating how this should be handled - I wouldn't call myself qualified to add anything meaningful to their discussion but I've studied this enough to understand the complexities. In this case BST is not a 3600 second offset from UTC: just before a positive leap second it is closer to 3599 seconds (UT1 lags UTC at that point), and adding the leap second to UTC alters the offset to just over 3600 seconds. BST is a defined offset from GMT, which the UK still uses in law at least. GMT is essentially an older version of UT1: it is defined astronomically and has no leap seconds.
Simply adjusting the definition of time zones like that has dramatic side effects - in particular events that happened close to midnight in the past can suddenly move from having occurred on one date to another simply by the introduction of another leap second. It doesn't take too much imagination to envisage all sorts of issues this can raise in the field of contract law alone.
The most elegant way of representing this I have seen on a computer is simply to allow a second to be two seconds long during a leap second, i.e. allow space in whatever fine scale representation you use to accommodate e.g. 23:59:59 and 1500 milliseconds as the middle of the leap second, as opposed to a 23:59:60.5 representation. Apps that don't care too much about precision timekeeping simply get 60 second minutes and can ignore the whole issue, still being guaranteed that the date, hour and minute are always perfectly correct - the second too, unless they try to get clever with internal representations. Apps that need ultimate precision can get the precise time any time they wish.
Yes, it's still a bit of a fudge but it avoids many of the issues of the simplified alternatives. It also closely approximates the current treatment with periodic insertion of leap seconds.
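A minimal sketch of that two-second-second idea (my own toy layout - the milliseconds-into-day representation and function name are assumptions, not any standard API):

```python
def to_hms_ms(ms_in_day, leap_today=False):
    """Toy broken-down-time conversion for the scheme described
    above: during a positive leap second the final labelled second,
    23:59:59, is allowed to run to 2000 ms, so no 23:59:60 label is
    ever emitted."""
    if leap_today and ms_in_day >= 86_399_000:
        # Everything from 23:59:59.000 to the end of the long day
        # stays labelled as second 59, with up to 1999 ms
        return (23, 59, 59, ms_in_day - 86_399_000)
    s, ms = divmod(ms_in_day, 1000)
    h, rem = divmod(s, 3600)
    m, s = divmod(rem, 60)
    return (h, m, s, ms)

print(to_hms_ms(86_399_500, leap_today=True))  # → (23, 59, 59, 500)
# Middle of the leap second, as described in the post:
print(to_hms_ms(86_400_500, leap_today=True))  # → (23, 59, 59, 1500)
```

Apps that only look at the (h, m, s) fields never see anything unusual; apps that care can inspect the millisecond field.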
You print code out?
Of course, in much the same way that over a foot of space on my bookshelf is given over to hand-written notes on one of my pet hobby projects.
What are the most important development tools? Pencil, eraser, notebook and source listing. Fancy software tools can help massively for specific issues but ultimately nothing can replace time and effort spent understanding and thinking about the task in hand. Physically moving away from your computer instantly eliminates one of the principal distractions to doing that properly.
I agree, but only if there is an inescapable, ultimately-terminal penalty for not forwarding the messages to the registered owner of the domain
But that doesn't really get you very far. If the site is engaged in commerce you still need real contact data in case of dispute. "These guys promise to send on the correspondence" doesn't look very good on a county court writ and ultimately the bailiffs can't do anything without knowing where the defendant is. You can claim simply "Get a court order and force them to release the data" but in most jurisdictions pursuing something other than money (such as records) is vastly more complex and expensive. And that is always assuming the proxy companies don't relocate to some noddy jurisdiction where such orders are difficult or impossible to get.
Privacy is generally something to defend, but if you are trading with the public and taking money from them you have to follow the rules of the game. Being accountable for what you sell is part of that.
Doing an 8-bit design would have been less than half the work.
That is something of a mixed bag. It reduces complexity in terms of components and wiring but significantly increases design effort. Large parts of a 16-bitter are simply the equivalent 8-bit circuit replicated, but going in the other direction introduces some significant extra issues. Presumably you would want to address more than 256 bytes of memory, so that makes an address (at least) two words long. Similarly it's somewhere between difficult and impossible to encode a complete, useful instruction set in eight bits, so you have multi-word instructions too. That gives you a large amount of hassle co-ordinating those half quantities, and you need multiple cycles to send those values around. That in turn adds complications as you co-ordinate timing in multiple-cycle instructions.
I mentioned somewhere above that 12-bitter I set about designing a few years ago. 12 bits was chosen very deliberately as the simplest option - it's the narrowest width where you can sensibly have arithmetic, addresses and instructions all the same length. All instructions were single cycle so keeping everything in sync was also made a lot simpler, even if multiple cycles would have allowed you to crank up the clock rate a little. It did limit you to a 4096 word address space but I considered that adequate for a demonstration system.
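For a flavour of why a single-width word simplifies things, here is a hypothetical 12-bit instruction layout - the actual format of the design above isn't given, so this is a PDP-8-flavoured guess of my own - where one fetch gets you a complete instruction:

```python
def decode(word):
    """Decode a hypothetical 12-bit instruction word: a 3-bit
    opcode plus a 9-bit operand/address field (indirection or
    paging would make the full 4096-word space reachable)."""
    assert 0 <= word < 4096        # exactly one 12-bit word
    opcode = (word >> 9) & 0b111   # top three bits
    operand = word & 0x1FF         # low nine bits
    return opcode, operand

# A single fetch yields a complete instruction; an 8-bit machine
# would need extra fetch cycles to assemble the rest of an address,
# with all the sequencing complications that implies.
print(decode(0b101_000000011))  # → (5, 3)
```

That single-cycle fetch is what keeps the control logic so much simpler than the 8-bit multi-word alternative.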
To all the downvoters of my comment above, do you really need to be told that a transistor is a digital switch?
No, I don't need to be told that. You think that a transistor is a digital switch. I know that it isn't. You can wire them into an arrangement where their gain is high enough that it makes no difference and the resulting system behaviour is very digital, but that is in circuit. In isolation there are no ifs or buts, a transistor is an analog device.
I did look into doing something similar to this a few years ago and it got much further than a thumbnail sketch. The goals were slightly different - this was a 38.4kHz 12-bitter with a few niceties (e.g. hardware multiply and divide) and a few oddities (hardware assisted garbage collection). It was a lot simpler than this, estimated at 3,500 transistors and perhaps 3ft × 2ft × 18in in size, but no integrated circuits anywhere - not even memory. Most of that reduction in complexity was down to the use of threshold logic gates, which are a slightly quirky semi-analog system - digital inputs, digital outputs, but internally the processing is very analog in nature, which allows for a much richer set of functions than pure Boolean logic. This approach was common for research systems in the 60s to reduce the complexity of the systems by exploiting that very analog nature of transistors.
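In software terms a threshold gate is nothing more than a weighted sum against a cut-off - here's a quick sketch of my own (not from the project) showing how a single gate covers a function that would take several Boolean gates:

```python
def threshold_gate(inputs, weights, threshold):
    """A threshold logic gate: output 1 iff the weighted sum of the
    digital inputs reaches the threshold. In hardware the summation
    is analog - e.g. currents summing at a node."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# A 3-input majority function: unit weights, threshold 2. In pure
# Boolean logic this needs three ANDs and an OR; here it is one gate.
print(threshold_gate((1, 0, 1), (1, 1, 1), 2))  # → 1
print(threshold_gate((1, 0, 0), (1, 1, 1), 2))  # → 0
```

Varying the weights and threshold gives a much richer family of single-gate functions than AND/OR/NOT, which is exactly where the transistor-count savings came from.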
Utmost respect for the guy though because I know precisely what is involved. My project didn't get further than design, a few test assemblies, and a software emulator and assembler before the transistor I had based it around (BF199) went out of production. They were less than 3p each in quantity and when I saw the cheapest through hole alternative was £1.50 the entire thing went on the back burner.
remember the value of hiring decent developers in the future - you know, the sort that leave technical documentation and source code behind, instead of only binaries which, "just work".
That is more of a project management issue than a developmental one - what materials are supplied as a deliverable is outside the control of the typical developer. It is great simply demanding documentation and source but you need to take greater care specifying exactly what it is you want.
Personally I would rather forgo a copy of the source code in favour of a complete dump of the source code management system for the project, even if it's Visual SourceSafe or something equally oddball. That (indirectly) gives you the source of course, but also a lot of good documentation, either express or implied - i.e. it's easy to identify the revision that introduced a given section of code. Hopefully you have a descriptive log message saying why it was written. You almost certainly do know who introduced it and when. That's a great index into the developer's rough working notes if you are fortunate enough to have copies of those.
In terms of documentation those working notes are valuable but you need things on top to pull everything together, ideally written in hindsight - thumbnail sketches written beforehand are of questionable usefulness.
For an illustration of that consider the work I am doing right now: we are about 18 man months into a development project and supposedly around half way through - although personally I guess we've only written about a third of the code but are two thirds of the way through the overall effort. The fundamental modules all exist in some form - they are either complete or are at least fully explored and exist in some rough state. However they are linked together with rough and ready ad hoc "scaffolding" simply to show each module is working in context. I estimate it'll take me (personally) ten days to replace that with a properly engineered replacement that will form the new innermost core of the application.
Now consider how that impacts on the documentation: put simply, any architectural overview already written will be completely invalidated once this next phase is complete. That kind of overview needs to be written once the code exists in tangible form. That brings me back to the first point, this is a project management rather than a development issue. Right at the end of the project, when the app is out of the door, time and resources need to be given over to creating that documentation. Put simply, many customers are not willing to pay for that even if it saves time and money in the long run.
However, even if you do have all that documentation, that is not to say it will be any use. The specific issue here is mixed 16 and 32-bit code. Win16/32 thunks are not elegant and I can't see many devs using them on a whim. The most frequent use case is probably (almost certainly) a 16-bit library in use by a 32-bit application. Probably a third party library - the developer doesn't have the source code, let alone the customer.
The South African Reserve Bank, like the Bank of England, is the nation's central bank. As such, it is not a commercial bank and there is thus no way Shuttleworth could have held an account there.
But you can have an account with the Bank of England. Do not forget that until the postwar period it was a privately held bank and you could open an account with them as anywhere else. Following nationalisation it became much more difficult to open an account, but pre-existing accounts were left in operation. So while you'd have to be quite elderly now to still have a personal account, it is perfectly possible, and more common for companies. Younger entities may also have accounts opened on government direction, as happened for Huntingdon Life Sciences for example when RBS closed their account.
...and get a meaningless answer.
What counts as "disruption"? A member of staff being half an hour late in to work one day because of a road closure?
If the question actually asked is even vaguer, e.g. "affected" by natural disaster, then it swiftly goes up to 100%. Why? Well, how many hard drives do you get through? The price you paid a few years back was affected by the flooding in Thailand that knocked out a lot of production.
Massive disruption a quarter of the way around the planet. Time to pull out the disaster recovery plan! And then... err... adjust a few rows on the projected expenditure spreadsheet. It's those kind of jobs where you really earn your money.
IOW the usual claptrap study commissioned by someone with something to sell.
Forget Hanlon's Razor, this is without a doubt corruption.
Not necessarily. The public sector tend to be shit customers, with a lot of hoops to jump through adding costs and hassle. They tend to prefer a small number of one-stop-shop suppliers, which limits the size of their marketplace - they are not interested in how cheap your pens and pencils are unless you can also supply them with the copier paper, toner cartridges, desk tidies and that oddball size and style of clear plastic wallet that they use for one job in particular. The likes of Viking can go after contracts like that - the small operation that undercuts everyone else by importing directly from Japan can't.
I've seen government procurement from both sides and the entire bureaucratic process produces perverse results - they are so obsessed with saving the taxpayer money that they end up spending far more than they would buying off the page. The drawn out nature of things doesn't help - 60 day terms still seem to be the rule, for example. Or a quote given at 2pm may result in an order at 3:30pm in the private sector, but six weeks later for HMG. They still expect that price to be valid, so what do you do? You add a little to guard against fluctuations. Featherbedding the quote to reflect additional costs, hassle and risk is not corrupt, it's normal practice.
It's difficult to tell without seeing precisely what was taken down, but it seems that in egging things on that little bit further the takedown does trip over that "swear by penalty of perjury" clause. It appears that he has been redistributing his own webpage after it has been redistributed (back to himself) with unauthorised modifications. However the takedown asserts that "the entire web page detailed above is infringing on FN's proprietary software". That does not appear to be remotely true and the CEO signing the letter would have known that - none of this "to the best of my knowledge and belief", a.k.a. "grep told me" business.
What are the chances of charges being pressed? Well he's a CEO so they are probably slim to none.
Seriously, is it envy? London is the third biggest city in Europe, so you can expect it to have a very high concentration of people...
No, it is not envy, you are trolling and your claims have no basis at all in reality. Plenty of people have given you anecdotal evidence contrary to your claims: they are demonstrably incorrect. If you want more concrete evidence it took, oh, seconds to find this story in the local rag. Note the installation of emergency generators. Oh, and if your geography is as bad as your knowledge of emergency power provision, Penwortham is in Lancashire, not London.
In actual fact, as I noted previously you are likely to receive a worse service in London simply because of that density you cite as an argument in your favour. The more properties affected the longer recovery target times and compensation thresholds get. This in London was a major outage - you could multiply the area by ten in rural areas and it would still be a minor outage subject to those more stringent targets.
I live in a village in the Midlands and can state they do do this. The difference is you may not notice: we lost power lines, so they hooked generators up after the break and power was restored. However, unless you saw the bloody great containers in a field, you would be none the wiser and just presume the lines were fixed.
Of course they do - they have a legal responsibility to do so. It is just, as you note, that the density in most locations is much lower and doesn't have the same headline grabbing attention - there is nothing particularly dramatic about the appearance of a trailer in the local church car park. This outage is on an epic scale, but that makes the generator response less likely - the costs skyrocket and compensation clauses kick in much later when a large number of properties are affected.
My father used to work for a generator services company. They didn't get much of this work since it wasn't their focus - they were more sales and service, with a sideline in advance hire for events and engineering work, than emergency response - but they'd get occasional calls for this kind of stuff across northern England. Even in rural locations: even three or four farmers will play merry hell if they can't milk their cows. The OP is simply making up stuff on the spot - conspiracy theories only work if they are not completely divorced from reality.
Why are we sticking around with a patent-encumbered format that only supports 256 colors?
The patents expired over ten years ago. Given the age of the format it is one of the few animation standards you can be clear is not encumbered by patents.
The directors take great pains to remove their liability for misdeeds. They act in broad strokes: one step removed. Pinning the blame on them would be like trying to pin the tail on a runaway donkey.
There are established mechanisms to deal with that: for example you can use the Health and Safety at Work Act as a template in that regard. Directors are personally (and criminally) liable for any breaches of the act within their company. They can't wriggle out of it and transfer the responsibility on to someone else, in fact attempting to do so is itself evidence of guilt.
However, it doesn't matter what they do, in any company of a few thousand people there is always going to be plenty of stuff going on that the directors are completely unaware of: if two junior staff decide by themselves to develop a "more efficient" way of work that is unsafe management do not necessarily hear about it until it is too late. Their only effective defence in such a case is to point to procedures they have in place: for example that safe working practices have been determined and staff have been trained in their use, that relevant equipment is provided and in appropriate condition, that regular health and safety audits are carried out, and there is a well defined whistle-blowing mechanism to raise issues that still crop up. If you can show all this the courts take a reasonable view - you did everything practical to ensure the workplace was safe but shit happens, therefore no guilt attaches to you as a result of this accident.
There is no reason in principle data protection could not be similar. I'm not entirely convinced about criminal liability - calling for that always sounds to me like vindictiveness after the event, and putting too much control at the very top is also putting that control into the hands of non-specialists - but I'll leave that to one side. I think (hope) this is the point the ICO are trying to make - the fact there is a breach should not necessarily lead to a sanction. If there was gross negligence and sloppy practices then sure, fine them and fine heavily. If on the other hand you can point to solid procedures in place to protect data and that they are subject to regular review to keep them current, but still have a breach falling into the "shit happens" category, perhaps that should be viewed as an opportunity for review as to how defences can be improved in future.
Not sure how many customers are willing to go for that schema. I can imagine only a few uses (one-off data migrations or conversions, test runs and what else?) where this makes sense, but perhaps I'm not experienced enough.
I can see plenty, so many in fact that there's a special term for it. Not some newfangled buzzword from a marketroid but a real term that actually means something - good old fashioned batch processing.
I'll admit that I'm struggling to see applications here that would fit well, but that is more a reflection of our infrastructure than the merits of the offering - anything here is either not substantial enough to justify configuring a VM for, or the amount of data that needs uploading is out of proportion to the CPU time requirement. However, we do most of our processing in house, with very little in the cloud, and that essentially directly customer-facing stuff. If that wasn't the case and our data was already cloud based then something like this would be very attractive for many tasks.
Consider BT installing fibre in a duct, needing X fibres to meet current and imminent demand. A beancounter looks at the job and decides that X+Y fibres should be installed, weighing the cost of installing that spare capacity and the cost of capital used against the potential profit raised by selling services over that fibre and the probability of the fibre actually being needed. If you are selling a fully managed service the potential profit is nice and juicy, providing an incentive to make Y reasonably large in the first instance.
You then change the equation to make those potential future profits smaller - instead of fat profits on service you get much smaller amounts renting out bare fibre. The return on investment calculation shifts in favour of not installing so much spare capacity to begin with. In the long term that doesn't benefit anybody.
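The beancounter's trade-off above can be sketched as a simple expected net-present-value sum. All the numbers below are illustrative assumptions of mine, nothing to do with BT's actual costs or prices - the point is only that cutting the revenue per lit fibre flips marginal installs from "worth it" to "not worth it":

```python
def spare_fibre_npv(install_cost, annual_revenue, p_lit, years, discount=0.05):
    """Expected net present value of laying one extra spare fibre now.

    p_lit is the probability the spare fibre eventually gets lit and
    earns annual_revenue each year; future revenue is discounted at
    the given rate. All inputs are illustrative assumptions.
    """
    expected_revenue = sum(p_lit * annual_revenue / (1 + discount) ** t
                           for t in range(1, years + 1))
    return expected_revenue - install_cost

# Same fibre, same odds of being needed - only the revenue model differs.
managed = spare_fibre_npv(install_cost=500, annual_revenue=2000, p_lit=0.3, years=10)
bare    = spare_fibre_npv(install_cost=500, annual_revenue=100,  p_lit=0.3, years=10)
print(managed, bare)   # managed service positive, bare-fibre rent negative
```

With fat managed-service margins the spare fibre pays for itself in expectation; at bare-fibre rental rates the same install is a loss, so Y shrinks.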
I'm not a massive fan of BT but equally I don't have a massive axe to grind about them - I'm simply looking at this from an economic standpoint. BT are a telecommunications company that makes money from selling service. They are not in the business of renting out bare fibre for chump change.
I mean, come on, who'd give their child a Christian name that no-one can spell correctly, never mind Starbucks personnel.
I totted up all the Claires, Clares and (occasionally) Clairs I knew a few years back and arrived at a figure of 38 different women. Allowing for a few missed off and more I've become acquainted with since it's probably in excess of 50 by now. I don't have a hope in hell of remembering which spelling it is for which. I dated one of them for six years, lived with her for four, still don't know what her name was.
Of course if you decided to make yourself redundant then you'd be obliged to hold a consultation with yourself. Which would be somewhat awkward I'd imagine.
This may be tongue in cheek but it is the legal reality. If you ever set up a limited company you have to hold an AGM every year with all the directors there and all shareholders invited. For a single-person company that gives you a roll call of one. However, the AGM still has to be held and minutes still have to be taken. Certain business operations can only be done following a vote at the AGM.
OTOH it does provide an excuse to spend a fortune on crap snack food. You justify it to the missus with your director's hat on, in that you have to provide nice nibbles for the shareholders at the AGM.
Bloody hell! 76 years with no downtime - that's amazing!
It's a continuously operating radar site, which is a distinct and more easily achieved thing than keeping any individual piece of equipment on that site in continuous operation.
Late last year I replaced the battery in the UPS powering this workstation, which started me wondering about exactly this. I'm not going to go through all the figures, but to get to 10kWh using those cells would cost ~$2750, even assuming no bulk discount, and top name (Yuasa) batteries from a proper distributor. They'd have a five year life and 200-300 charge cycles, so replace them twice as often and double the comparative price to $5500. Add the circuitry needed to generate a mains approximation and we're somewhere in the same ballpark as that Tesla unit. You probably could get cheaper using a more sensibly sized battery to begin with, but I'm not going to start optimising there - we'll simply leave this unit and lead acid batteries as roughly comparable on the metrics used so far.
OTOH lead acid cells have quite incredible power density - that single cell in my UPS can deliver 300W. A bank of 120 to store 10kWh would be capable of delivering 36kW on the same basis. Sure, you'd drain it in minutes at that load, but if you need the 8kW shower, the 3kW kettle and the 2kW oven all on at the same time for a short period, the ability is there. And no, all those batteries wouldn't be absolutely huge - in rough terms a yard square and seven or eight inches deep, call it the size of a radiator.
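Rough working for those figures, as a sketch. The per-cell numbers (a 12V/7Ah UPS cell at roughly $23 apiece) are my assumptions read back from the totals above, not quoted prices:

```python
# Back-of-envelope for a 10 kWh lead-acid bank.
# Assumed cell: 12 V / 7 Ah UPS battery (~84 Wh), ~$23 each, 300 W peak.
cell_wh = 12 * 7            # ~84 Wh stored per cell
cell_cost = 23              # USD per cell (assumed unit price)
cell_peak_w = 300           # peak deliverable power per cell

cells = -(-10_000 // cell_wh)          # ceiling division: 120 cells for 10 kWh
bank_cost = cells * cell_cost          # ~$2,760 for one set
lifetime_cost = 2 * bank_cost          # half the Tesla unit's life, so buy twice
peak_kw = cells * cell_peak_w / 1000   # ~36 kW peak from the whole bank

print(cells, bank_cost, lifetime_cost, peak_kw)
```

Which lands on 120 cells, ~$2,760 per set (~$5,520 over a comparable lifetime) and a 36kW peak - consistent with the ballpark figures above.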
This battery seems more a desperate attempt to find new markets for Tesla's battery tech to cross-subsidise the cars. Sure, it's a nicely packaged solution, but it can't cope with real requirements. Proven 1850s tech can, for around the same money.
How can you sum up the government's achievements without mentioning her? Getting an expert of such world renown to head up a stunning success like the Year of Code was a stroke of genius.
..not the El Reg reporting of this, but the claims made in the first instance.
So you add this lens to a cheap camera phone and it functions identically to a $20,000 microscope? Adding a single lens does not add everything else you are paying for in a professional instrument.
Even the test chosen seems hand-picked to sound impressive while being nothing of the sort. Viewing prepared slides at ~100x is among the easiest jobs you can ask a microscope to do - you don't have the depth of field issues affecting the stereos at even lower magnifications, and you don't need the same correction for aberrations as at higher power on slides. Picking this one test is akin to that scene in Top Gear - "At 40MPH this £7,000 city car easily overtakes that £250,000 supercar travelling at 30MPH" - the test is so far removed from the advanced capabilities you are paying for as to make the comparison meaningless.
Somewhere in here there may be some small development of merit, but so many layers of bullshit have been piled on top of it - whether by the original authors or the university press department - that you ultimately end up throwing out the whole lot as nonsense.
Downvoted for incorrect use of 'acronym' - that's an initialism.
You don't know until you've heard it. ECSS -> "Ex" is far less contrived than the likes of SCSI -> "Scuzzy".
However, what "non-IT" people don't realise is that just holding it ON the RFID reader does nothing. It has to be moving in order to induce the current to power the radio circuit inside it.
Err, no, in the same way that you don't need to be juggling your laptop charger in order for the transformer to work. The magnetic field is constantly varying anyway; you don't need to add movement on top of that, which is generally too slow to generate meaningful power at any rate.
Lord knows, none of you twerps bashing these researchers - or, for that matter, Richard - would want to do any of what we in the computer science world call "research". If you had, you might have discovered that "probabilistic programming" is a term of art that's been in widespread CS use for at least a decade.
A simple search of the CACM archives or ACM Digital Library would have told you that. All you self-professed experts are ACM members, right? You might want to skim, say, Gordon et al., "Probabilistic Programming", Proceedings FOSE 2014.
Did you even read the comments before slagging them off? If you had you would have seen my point:
I recall skip lists for example were described as probabilistic when first presented
So that'll be CACM back in 1990 then. Your point is what, exactly?
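For anyone who hasn't met them: the "probabilistic" part of a skip list is nothing more exotic than a coin flip per inserted node, which decides how many levels of index the node appears in. A minimal sketch of that level choice (parameter names are mine):

```python
import random

def random_level(p=0.5, max_level=16):
    """Pick a skip-list node's height: keep flipping a biased coin,
    going up one level per 'heads', stopping at the first 'tails'
    (or the cap). Expected search cost is O(log n) only in
    expectation - the structure's balance is probabilistic, not
    guaranteed, which is exactly why it was described that way."""
    level = 1
    while random.random() < p and level < max_level:
        level += 1
    return level
```

Run it a few thousand times and roughly half the nodes sit at level 1, a quarter at level 2, and so on - the whole point is that no rebalancing logic is ever needed.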
Information is not like other commodities in that you can transfer it while still retaining it yourself, i.e. if I tell you my name I myself still know what my name is. For data that is available at no cost (i.e. it requires no effort to collect or collate), a traditional model argues that its value is little more than the cost of physically transferring it.
However, that ignores the value of exclusivity - not the value of me having that data but the value to me (even if only perceived) of you not having it. I may have good reasons for wanting you not to know something even in a commercial context. I might look up a 1970s sexploitation film on Amazon one evening but not want similar titles recommended to me the next day in the office. I may not want constant bombardment from hundreds of salesmen each time I am in the market to buy some high-profit item. I may not want to harm my negotiating position when purchasing said item because the seller knows I need it immediately. I may oppose the information transfer for the very reason the business wants it: I do not want to be manipulated for that organisation's advantage and at my own cost.
These are rational arguments against giving information to all and sundry, and they show the asymmetry in the transfer. The organisation is gaining information, which is all this article considers, but on the other hand I am not losing information but privacy. It doesn't really matter what the real cost of that turns out to be: the mere perception of value translates directly to real value, because I will rationally refuse to sell something for less than I think it worth.
There's still a lot to be said for a dumb TV and an external media centre PC. If you've got something capable of running Linux (Raspberry Pi, perhaps) then it should have quite a long life, and if it runs out of processing grunt, you can keep the TV and upgrade the media centre hardware.
Yes, there's a lot to be said for it but equally it isn't a universal solution. TV + media player isn't that bad, but it gets out of proportion when true dumb panels get recommended and you end up with a separate panel, TV tuner, amplifier, media player etc with half a dozen plugs, half a dozen things to turn on and off, and half a dozen remotes.
A lot of people want a simple, integrated device that does the lot. Even if you are willing to put up with that rigmarole for the main set in the living room, it may be completely impractical for the secondary sets in the kitchen or bedroom. Smart TVs are not a bad idea in themselves; the problem is lack of ongoing support. Since many of them run a general purpose OS anyway (e.g. but not exclusively Linux), allowing the user community in would let the software be kept up to date.
Oh wait, that might deter the three year replacement cycle the TV manufacturers seem hell bent on getting us all to adopt...
Are we talking wooden frames with metal inserts for traditional cage nuts, or pure timber?
Commercial rack strip, e.g. like this.
I would have thought that the weight of a piece of equipment or laden shelf which only attaches at the front would cause undue stress on the frame.
It's easy to underestimate the strength of timber based on the simple understanding that metal is stronger. The comparison is less clear-cut when you put a piece of 3x2 against 1" steel box section, simply because of the timber's much greater cross section. Those racks would probably struggle if you filled an entire rack with UPSes and their batteries, but that goes for the commercial steel units too. We certainly didn't have any problems in practice.
Whack a roof on the top and a door at each end and you have a rudimentary room into which cold air is introduced. The hot output from the backs of the cabinets then goes into a separate area and can be pulled out by the extractor.
This always sounds like a retro-fit to me, and it is unconsciously premised on commercial, off-the-shelf racks. It's the obvious solution until you see somewhere that does it differently. My last-but-one employer used wooden racks custom built and fitted for the specific room by a local joiner's. The racks ran from floor to ceiling, with ducts for power, air and data built in overhead (both along and between racks), and the doors on each end of the cold aisles were an integral part of the racks rather than tacked on as an afterthought.
When you first went in there your initial thought was "This has been done on the cheap", which I suppose is inevitable when the primary construction material was 3x2, with unused bays screened off by hardboard quarter-rack blanking panels. After a while though you learned to love them - they did the job, cables were easy to route, and the hot aisles were very open, permitting easy access to all the connections on the rear. Oh, and you had rear rack strip at both the 600mm and 1000mm positions - why can't more racks be like that?
Depending on your definitions that was either a large server room or a small data centre: 40 racks and two cold aisles. The cost of all that joinery was significantly less than 40 off-the-shelf racks, despite all the additional infrastructure that came as part of the package. From "cheap" your attitude shifted to "Why doesn't everyone do this?", and I became convinced that in that area at least, following the herd is not the best idea.
a) shouldn't the action be against the reviewer?
Not under British libel law where anyone in the chain relaying the message can be pursued. If there is a libellous comment about you printed in a newspaper you can theoretically go after the newspaper, the printers, the distributors, even individual newsagents.
Of course you can't. In my case you'd need at least an A3 screen to draw them actual size, let alone exaggerate them.