It's a little larger than the size of an A2 sheet (four A4 sheets put together)
32" Wacom Cintiq... £3,500, and that lacks the hinge of the Surface Studio. The hinge is no gimmick: it lets you do "drawing work" and "desktop work" with the same display - I've never seen a Cintiq setup that didn't have another, also very expensive, monitor sitting on the desk too, so in many cases the Studio would actually be a cost-saving option (!)
Like El Reg, I would really like MS to have offered a video-in as an option, but given the way modern computers are designed, I suspect there's some place in the board design where a clever hacker could add a "bypass" video input once the motherboard is no longer viable.
The real computing load in media production is in final rendering, compositing, grading and editing. Actually drawing or designing the parts for a feature doesn't need very much computing power. In a large creative house (animation studio, ad agency, etc), these will be on the desks of the artists, but the render farm will be in a server room, and the grading/editing suites will still use "regular" high-performance workstation PCs...which will have monitors that cost far more than £3000 attached to them.
The economics of media production are about getting the maximum productivity out of each person - an extra two grand on tools for every artist's desk is not a big expense within a ten million dollar production (and ten million dollars puts you into the "small, independent" category when it comes to films these days) if it makes those artists even 5% more productive.
WSL started life as a way of getting Windows Phone to be able to run Android apps - it was developed alongside a code-porting toolset for iOS to address the other side of the mobile app market. The iOS tools were released, but the Android project never saw light of day, probably because while the iOS work would still be viable for Windows PCs, the Android one was only ever going to be used on the mobile devices that Satya was getting ready to kill off.
RIM/QNX did a similarly clever job to get Android apps to run on Blackberry devices, but QNX had the advantage of being a "Unix" kernel, unlike NT which stems from the VMS world. Now, that said, the NT kernel has always been able to accommodate different userspaces, but mapping Linux syscalls to equivalent functions inside NT must have been a big ball of fun...
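Conceptually, that mapping is a big table-driven translation layer. Here's a toy Python sketch of the idea (the syscall numbers, the `nt_*` helpers and the strings they return are purely illustrative - WSL's real mapping lives in a kernel driver and is vastly more involved):

```python
# Toy illustration of syscall translation: a Linux syscall arrives as
# (number, args) and must be re-expressed in terms of NT-style primitives.
# All names and numbers here are invented for the sketch.

LINUX_SYSCALLS = {0: "read", 1: "write", 2: "open", 3: "close"}

def nt_read(fd, count):
    # a real shim would call NtReadFile on the handle backing this fd
    return f"NtReadFile(handle_for_fd={fd}, length={count})"

def nt_open(path, flags):
    # Linux open() flags don't map 1:1 onto NT's desired-access and
    # create-disposition parameters, so each flag needs translating
    return f"NtCreateFile(path={path!r}, flags={flags})"

DISPATCH = {"read": nt_read, "open": nt_open}

def handle_syscall(number, *args):
    name = LINUX_SYSCALLS.get(number)
    if name is None or name not in DISPATCH:
        raise NotImplementedError(f"syscall {number} not mapped")
    return DISPATCH[name](*args)

print(handle_syscall(0, 3, 4096))          # a read() request
print(handle_syscall(2, "/etc/hosts", 0))  # an open() request
```

The hard part, of course, is not the dispatch but the semantic mismatches each entry hides - fork(), signals and file-descriptor inheritance have no tidy NT equivalents.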
"Passion" means lots of things. It's not so long ago that it still meant "uncontrollable fury", which I honestly would pay a small ticket price to witness at an Apple Store.
But the original meaning of the word "Passion" was as a contrast to "action". It meant to be uncontrollably subjected to something unpleasant by an external power*. Actually, now I say that...
* this older meaning is why the "Passion of Jesus Christ" is so called in Christian theology.
I listen to a lot of podcasts... or rather, I listen to the first two minutes or so of a lot of podcasts (I wrote a podcast app), and it amazes me what people can stretch out to an hour of dull conversation. And there's definitely a textbook out there in the deep pit of hell where marketing people are trained that says "to promote your idiotic product, do a podcast", because there's a hell of a lot of those "this is my BUY MY BOOK podcast about BUY MY BOOK and I'm joined by BUY MY BOOK..."
My personal favourite is the misplaced optimism of the "only one episode ever made" podcast: "Hey, welcome to the VERY FIRST super-duper-cast. Every week we'll be talking about stuff and it'll be AWE-SOME..."
My general rule is that anything that's been broadcast on radio somewhere is good for a listen: editors are truly the most underrated, and most necessary, people in the media.
Okay, I'm not much of an NFL fan but from an English publication, that is somewhat... rich. While many dollars are showered on the sport, the wages for players are strictly controlled (the NFL operates like a Soviet-style centrally-planned economy), and fall short of what you'd expect - especially when you look down the team roster.
But, starting at the top: the highest-paid player in the NFL is Minnesota Vikings quarterback Kirk Cousins at $22 million a year. Meanwhile, the highest-paid player in the English Premier League is Manchester United striker Alexis Sanchez at £25 million. Pounds. Or $32 million in greenbacks.
However, the gap widens dramatically once you get away from the big names. One thing about NFL teams is that because there's a controlled salary budget, the stars tend to get huge money, while the utility players are paid a tiny fraction of that: the median salary of a quarterback is about $15 million, but of an NFL full-back it's only $620,000 (okay "only", but the point is that there's a 25x difference). By contrast, no English Premier League player is paid less than £700,000 (about $1 million) a year, and that's at a team where the top-earning player is paid maybe £3 million a year.
Also, the yearly draft, limited squad sizes and the lack of other places to play makes the position of an NFL player a lot more precarious. While there's always the lower leagues for a player dropped from an EPL team (and Championship teams can and do pay in the million-pounds-per-year range), the dropped NFL player has very few professional options open to him.
I upvoted that, but in truth Sculley saved the company. Getting rid of Steve Jobs and his insistence on commercially-suicidal practices (custom Apple-logo rubber feet for under the case?) was the masterstroke.
Unfortunately, like most reformers, he then stayed too long, and allowed a proliferation of wild blue-sky fantasies (Pink/Taligent, anyone?) to squander Apple's cash and its technological leadership - this was the late 1980s, when Apple genuinely was a cutting-edge technology company.
Exactly. And Coca-Cola has provided its logistics network in the past to aid agencies to distribute medicines and supplies to remote parts of Africa.
Also, note how in disaster operations in the USA, the large breweries and soft-drink makers shift to canning fresh water instead?
But some people can't see that corporations are neither a good nor an evil to society, I guess...
You know that picture of Scrooge McDuck diving into a swimming-pool filled with gold coins? That's the US healthcare market. Everyone's on the make, knowing that the insurers will pick up the cost (and pass it on, plus margin, to the consumer). Spend a week in a US hospital and you'll come away with a five-to-six-figure bill. Spending is out of control, with hospitals splurging on tech to try to woo patients, who aren't paying the full costs anyway.
In a way, it's like the mobile phone business a decade ago: lots of customers, presented with a situation where it looked like they were not paying the actual cost for something, so plumping for the most outrageously expensive option every time.
Sure, there's no real opportunity outside the USA, but with the big wads of cash up for grabs, who cares about the rest of the world?
Apple wants to get itself into health, because it's one of the few last places where there's a huge customer base and price does not matter. That's the kind of market Apple needs. And I suppose if health doesn't pan out, they could start making baby clothes...
If you think taking physical media to people's houses is an acceptable form of backup, you've been very, very lucky to date.
I don't run my own generator, because even though my job requires electricity, it's much cheaper to pay someone else for the electricity they generate in bulk.
Similarly, I could find someone to "mind" a disk for me, but how is that really better than putting it in a remote data center with a full-time staff that will look after it?
Guess what: I didn't write my own compiler either. I probably could, but it was actually better to pay a company that had made really good ones instead so that I could use theirs, and they'll fix any problems with it, and I can get on with doing my work...
Seriously, when did taking control of your own shit become out of fashion?
Well that's the same kind of question. Personally, I let the city's sewage system take care of my shit. What do you do with yours? Kilner jars? (Do not try to tell me you made your own septic tank...)
The issue with the 68000 was not that it lacked an MMU, but that it couldn't reliably restore the CPU state after a bus error (i.e., a bad memory access), so even if the system's bus-decode logic implemented hardware memory protection, the CPU couldn't recover from such an error. This was actually a bug in the CPU design, which was fixed in the 68010 in 1982. When the Amiga was being finalised, the 68010 was readily available at more or less the same price: the 68010 was a very small revision to the 68000, pin-compatible and code-compatible except for one single instruction (MOVE SR,<...>) that was made unavailable to User-mode code.
The reason why the Amiga didn't use 68010 is clearer when you look at the development of the computer: Amiga was developed as a games console, with Atari as the intended customer. Because of this, there was no OS developed in tandem with the project - and even if there had been, it would have been single-tasking. When Atari collapsed in 1984, and the Amiga contract passed from Atari to Commodore, the priority changed: consoles were believed to be dead and buried, and Commodore wanted something to compete with the Apple Macintosh. Commodore's internal effort at a 68000-based launcher, filesystem and CLI ("a DOS", basically) ran behind schedule, so they licensed the British Tripos product and ported it to the simple kernel they'd already got working ("Exec"). Tripos is where 'df0:' and its friends come from in AmigaDOS.
Neither Exec nor the Tripos kernel was Unix-based. VMS and CP/M seem to be major influences on Tripos, and Commodore's Exec was a tiny, clean-sheet scheduler and interrupt handler - and a somewhat odd one, as it ran in the 68000's user mode, not supervisor mode.
Anyway, there's an interesting comparison of AmigaDOS and Tripos here, plus the manuals for both OSes: https://www.pagetable.com/?p=193
Well, both iOS and Windows Phone do/did what you want: the UI refresh and event collection processes are run at higher priority than the rest of the system. Windows Phone's API also encouraged the use of asynchronous programming, which allowed the system to be exceptionally responsive even on low-end hardware. Plus, like AmigaOS, both of these are platforms where you're not running more than a handful of user applications at once. However, both iOS and Windows Phone exposed an opposite problem: the UI refresh and animations can keep working even when the app itself has stopped listening (I wish I could say that this phenomenon is only seen by developers while debugging their code...)
"Windows" and "Linux" really aren't single things. The responsiveness of your application on these systems depends largely on what framework or API you're using: UWP on Windows encourages developers to write asynchronous code that responds quickly to user interaction so you get the same fluidity as its late Phone cousin, but WPF and Win32 do not. On Linux, of the toolkits I've used in anger, Qt apps respond better than GTK+ ones, and Linux has the added overhead of using X11 messaging for events.
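The responsiveness benefit of asynchronous code is easy to demonstrate in miniature (Python's asyncio used here purely for brevity - the same principle applies to UWP's async/await or Qt's event loop): while a slow operation is awaited, the "UI loop" keeps ticking instead of freezing.

```python
# Minimal sketch: a long-running I/O task runs concurrently with the
# event/redraw loop, so the "UI" never stalls waiting for it.
import asyncio

async def slow_io():
    # stands in for a network fetch or disk read
    await asyncio.sleep(0.2)
    return "data"

async def ui_loop(ticks):
    # stands in for the event/redraw loop; it must never be starved
    frames = 0
    for _ in range(ticks):
        await asyncio.sleep(0.01)   # one "frame"
        frames += 1
    return frames

async def main():
    # run both concurrently; a blocking call here would freeze the UI
    return await asyncio.gather(ui_loop(10), slow_io())

frames, data = asyncio.run(main())
print(frames, data)
```

All ten "frames" complete even though the I/O takes longer than the whole frame budget - which is exactly what a synchronous call on the UI thread would prevent.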
MacOS is an odd case - it keeps some of iOS's smoothness, but also some of iOS's annoying feeling of powerlessness: Cocoa apps always feel like you're using safety scissors. It seems responsive, in that it doesn't stutter, but there's a very, very slight delay that provokes the feeling that the controls you're using are somehow not directly attached to the application functions - it's hard to describe, but to me the Carbon framework always felt "snappier".
In any case, my memories of the Amiga differ from yours: on the A500 I owned, I found it sluggish, with frequent stutters when running more than one application. I tried to keep those situations to a bare minimum, as Commodore's decision to attempt pre-emptive multitasking on a CPU without hardware memory management made working on the Amiga an exercise in saving your work very, very often. For games, though, it was incredible.
It would be a great disappointment to you...
At its absolute fastest, the 68000 operates at around 1 instruction per four clock cycles - that's the minimum time to transfer any data from RAM to CPU in a 68000 system*. So, just shy of 2 million instructions per second, flat out (i.e., no memory access except instruction-fetch), although about half of instructions require an additional bus cycle (=4 clock cycles) to fetch an operand or store a result, so let's say 1.5 million operations per second as a theoretical maximum for useful code. Let's double that to account for Blitter access (although in reality, the Amiga's odd RAM architecture meant you could never hit that maximum - you'd always need to shunt stuff from "regular" RAM into the "chip" RAM that the co-processors and video system could access).
(Incidentally, if you're interested in where the Amiga's Copper co-processor sprang from, you should read up on the Atari 800's video hardware: they had the same designer, and the same concepts are in there too.)
The latest Core i7 CPU, clocked at 4-ish GHz, is 500 times faster than that 68000 (ignore for now that the actual average speed of the part is lower than 4 GHz), so multiplying up our 68000 efficiency, we'd expect 750 million instructions per second, or maybe 1.0~1.5 billion operations if we count the Amiga's co-processors, no?
In fact, the Core i7 8700k runs at about 35 billion instructions per second: nearly ten instructions per clock-cycle, compared to the 68000's peak 0.25 instructions per clock cycle. That's thanks to using multiple cores, multi-level caches, and about four decades worth of pipeline optimisation that often results in instructions executing in "zero" cycles (i.e., the pipeline has the result ready before the instruction is due to be executed).
On the other hand, the 68000's simplicity meant that a single, reasonably competent engineer could design a full bus-decoder and glue logic package for a 68000 in a couple of weeks, and the lack of caches and deep pipelines made the 68000 a very predictable part in terms of timing: essential for real-time applications - this is why the family had such a long second career as an embedded controller (as the Motorola/Freescale/NXP 683xx).
* This relatively long bus cycle is what allowed systems like Atari ST to run video out of the RAM with no bus penalty: if you're careful to isolate the 68000's address signals from the bus when they're not needed by your RAM, you'll free up enough bus time to slip in a second RAM fetch in each 4-cycle bus period without interfering with the 68000's access.
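If you want to check the back-of-envelope arithmetic above, it only takes a few lines (all figures approximate, as in the post):

```python
# Sanity-check the 68000-vs-i7 numbers quoted above. The 68000 clock is
# the Amiga's ~7.16 MHz; everything else comes straight from the post.
clock_68000 = 7.16e6      # Amiga 68000 clock, Hz (approx)
clock_i7 = 4.0e9          # Core i7 8700K clock, Hz (approx)

peak_ips = clock_68000 / 4            # one instruction per 4-clock bus cycle: ~1.79M
useful_ips = 1.5e6                    # the post's estimate once operand fetches are counted

scaled = useful_ips * 500             # what clock-scaling alone would predict: 750M
i7_ips = 35e9                         # measured throughput quoted above

print(round(peak_ips / 1e6, 2))       # peak 68000 MIPS: 1.79
print(round(i7_ips / scaled))         # how far beyond pure clock scaling the i7 goes: 47
print(round(i7_ips / clock_i7, 2))    # i7 instructions per clock: 8.75
```

So the i7 does nearly 9 instructions per clock against the 68000's 0.25 - roughly a 47-fold gain on top of the raw clock-speed increase, which is the caches-cores-and-pipelines dividend.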
The mood in China has shifted from conspicuous consumption of luxury imports towards promoting the best of domestic product instead. Where once, flashing the new iPhone on launch day was seen as a sign of a successful and glamorous lifestyle, it's now frowned upon: why, goes the official line, is this person supporting an American company when there are Chinese businesses making mobile phones?
This phenomenon has also hurt car makers: Volkswagen, whose Audi brand was seen as the go-to display of wealth in the Middle Kingdom, has seen a massive drop in sales as customers shift to more socially-acceptable products instead. (Nothing to do with diesel trickery: only petrol models were sold in China.)
China's wealthy, like the wealthy everywhere, are acutely aware of which side their bread is buttered on (to use a culinarily-inappropriate metaphor...), so when the Central Committee suggests something, the wealthy and the upper middle-class tend to embrace the idea quickly and enthusiastically.
The current warmer of the desk chair in the Oval Office hasn't helped by using tariffs, but that only accelerated an existing process.
The GSM specification allowed for SIM1 and SIM2 on a single account (the original use-case was a carphone and a mobile phone on the same account - this was in the days before Bluetooth). Only Vodafone ever implemented it, and even then they never really publicised it.
Easy to see why, really, when they can sell you the same call-and-data allowance twice under the current scheme...
It's idiot programmers who are responsible for prepending BOMs onto JSON payloads, not the framework. By default, the .Net UTF8Encoding class does not emit a BOM - you've got to explicitly request that behaviour. It's been that way since the start (.Net Version 1.1 doc here: https://docs.microsoft.com/en-us/dotnet/api/system.text.utf8encoding?view=netframework-1.1)
But the JSON parsers are, strictly speaking, broken too. The JSON specification does not define a byte encoding (no, read it again: "a sequence of Unicode code points". Code points are not byte values), so parsers that accept only untagged UTF-8 are in error.
"Be liberal in what you accept, and conservative in what you produce" is always the best approach.
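To see both halves of the BOM problem in action, here's a minimal illustration (Python used purely for brevity; the same split between strict and tolerant handling shows up in most JSON stacks):

```python
# A UTF-8 BOM prepended to a JSON payload breaks a strict parser,
# while tolerant decoding ("utf-8-sig") quietly copes.
import codecs
import json

payload = codecs.BOM_UTF8 + b'{"answer": 42}'   # what the careless producer emits

# Strict: decoding as plain UTF-8 leaves the BOM in place as U+FEFF,
# which the JSON grammar does not permit before a value.
try:
    json.loads(payload.decode("utf-8"))
    parsed_strict = True
except json.JSONDecodeError:
    parsed_strict = False

# Liberal: "utf-8-sig" strips a leading BOM if one is present.
parsed_liberal = json.loads(payload.decode("utf-8-sig"))

print(parsed_strict)    # False - the strict path rejects the payload
print(parsed_liberal)   # {'answer': 42}
```

Which is Postel's law in miniature: never emit the BOM yourself, but accept it if someone else does.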
I don't think you should confuse Samsung's fairly shitty approach to OS updates with the hardware, which is pretty well-built and lasts well. Several people here have posted that their S* phones are still running fine, as is my Wave (I hardly ever start it, but it still works).
I find that when it comes to Samsung, there are two camps: the first are happy with their products, and the second are the ones who used to be in the first camp, right up until they tried to get an old one fixed.
And this isn't just phones: I've seen it with TVs, cameras, printers, washing machines, fridges. This is a company that just does not care about after-sales service. When the last pallet of products leaves the factory, it's as if the product never existed.
And, OS updates are really important for a mobile phone: not keeping the phone's OS up to date is ultimately the same as shipping an underspecced battery: your average customer tries to install whatever app they've just seen an ad for on TV onto their three year old phone, gets a message about an unsupported OS version (too old), goes to the phone shop, and ends up having to buy a new phone, simply because Samsung couldn't be arsed to run some automated tests. It doesn't matter that the hardware could keep running forever if you've got to root the phone to keep the OS up to date - 99.999% of customers have no idea what that even means, let alone how they'd go about doing it.
Apple is evil, but at least it keeps a roof over its serfs' heads.
I think the idea of Android phones degrading sharply over time has a lot to do with two things: The first is that the pre-eminent manufacturer of Android phones is Samsung - a company whose attitude to long-term support would make Don Juan blush.
The second factor is that Android phones have always been marketed using their spec-sheets, rather than users' actual needs. This has resulted in manufacturers cutting back on reliability in order to hit a higher performance value at the same cost. You can run cheap components at the top of their performance envelope and get the same performance as more expensive ones, but you will not get the longevity.
I'm interested to see how the company now using Nokia's brand-name is doing on durability. Longevity and toughness were always strong points of the original Nokia (because in its home market of Finland, there were never phone subsidies: customers bought a handset at full price, so needed to get at least five years' use out of it).
I don't suffer from migraines, except those times I had to handle banknotes in Switzerland...
The internet tells me that I'm referring to the pre-2005 Series 8 notes, but photos don't really do them justice: why waste 10chf on LSD when you could just stare at the money itself...
Anyone bigging up Ada would do well to read: ...
That article perpetuates two myths: the first myth is that strong mathematical ability is necessary for someone to be a programmer (i.e., Ada wasn't a good mathematician, therefore Ada could not have been a programmer), and the second that Ada Lovelace's place in computing history was earned for being "the first programmer"*. Both are mostly untrue.
Lovelace's insight was much deeper than just writing out code, and was probably one that only a "failed undergraduate mathematician" with a penchant for the arts would have had - the other people connected with Babbage's machine seemed to be too enamoured with numerical methods to have made the leap outside of Mathematics that Lovelace did**. Basically, she is the first person to have stated the idea that the mechanical and mathematical processes, as used by the Babbage machine, might have application to things that were at the time considered to be the domain of Art, not of mathematics.
Here's the gist of it: If you assign numeric values to things that are not in themselves quantities, they could be manipulated by a machine like Babbage's to produce results that, once decoded from that numeric form back into their real-world equivalents, would have meaning. This is such a fundamental idea in computing that most programmers barely even think about it (and one major operating system family, Unix, conflates the concepts of text and binary data to such an extent that developers often make errors in handling them that only become clear when they have to translate their applications into a different human language)
* she was definitely the first writer of developer documentation, though; the "programs" published under her name are much clearer than the equivalents in Babbage's own notes, and the written correspondence (and occasional arguments) between Babbage and Lovelace would be familiar to anyone who's ever had to write sample code for a new system.
** Even a leap to another area of Mathematics would have been world-changing. George Boole, of "Boolean" fame, narrowly missed collaborating with Babbage - they didn't follow up on what seems to have been a very cordial meeting at the International Exhibition of 1862, and Boole died (young) just two years later. Babbage adapting his Engine to use Boole's two-state arithmetic rather than decimal calculation remains one of the great "what ifs" of history - it would certainly have simplified the design enough for it to become feasible with the technology of the age. As it was, Boole's ideas on logical computing didn't get realised until 1937.
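Lovelace's encode-manipulate-decode idea is easy to render as a toy example (the melody and note-numbering below are mine, purely for illustration - her own example was the composition of "elaborate pieces of music"):

```python
# Encode non-numeric things (musical notes) as numbers, manipulate them
# purely arithmetically, then decode the result back into the original
# domain - the "engine" only ever sees integers.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def encode(melody):
    # assign each note its position in the chromatic scale
    return [NOTES.index(n) for n in melody]

def transpose(numbers, semitones):
    # pure arithmetic: add and wrap around the 12-note octave
    return [(n + semitones) % 12 for n in numbers]

def decode(numbers):
    # map the numbers back into the musical domain
    return [NOTES[n] for n in numbers]

melody = ["C", "E", "G"]                          # a C-major chord
shifted = decode(transpose(encode(melody), 2))    # up a whole tone
print(shifted)  # ['D', 'F#', 'A'] - D major, produced by arithmetic alone
```

The arithmetic never "knows" it is transposing music; the meaning lives entirely in the encoding and decoding steps, which is exactly the conflation-of-text-and-numbers point made above.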
As the lower-denomination notes are the most seen, surely placement there is the real honour?
Three of the eight Deutschmark notes featured scientists (DM 10: Gauss; DM 200: the Nobel laureate Paul Ehrlich; DM 500: the entomologist Maria Sibylla Merian), and a fourth celebrated an architect (DM 50: Balthasar Neumann). Three writers, a poet and one musician rounded out the set.
... Yes, that comes to nine people, because (in case you still labour under the misapprehension that Germans have no sense of humour) the DM 1000 note featured the brothers Wilhelm and Jacob Grimm, famed purveyors of fairytales.
The most surprising thing about this DM 1000,- note was that it actually saw real circulation: I received one in a pay-packet once in the mid 1990s, and I was able to spend it in a shop - okay, that took a bit of checking with the manager first, but for what was effectively a £350 note, it was relatively easy to use.
... Mr Dick revisited this a few times. I came here to provide a link to a different, more optimistic, take on the same subject matter:
Like much of Dick's early work, this is now out of copyright, so you can read it here: http://www.gutenberg.org/ebooks/28767
Non-residents often have need of US bank accounts, given how expensive, unreliable and cumbersome it is to transfer small sums in and out of the US banking system (and by "small", I mean under $10,000).
And while the US does indeed continue to tax its citizens who no longer live in the USA, non-citizens are fine: if your home country has a tax treaty with the USA, you simply sign a declaration that you are not a US citizen, and the IRS won't touch your account. They will, however, gladly report it to your home country's tax authorities if asked.
There's nothing very onerous in place to prevent a non-resident opening a US account, but - and this is where our Russian friends hit their personal roadblock - you do need to provide a fair amount of documentation to prove your identity. Honest actors will already have such ID, of course. The difficulties with this system occur when you want to open an account without revealing who you really are... which is fair enough, as that's precisely why the checks are there in the first place.
Andrew is onto something here by identifying freedom from the annoyance of one's iPhone as the best feature of an Apple Watch, but let's come at that from another angle:
How about taking all that "essential features of your smartphone" call and notification functionality, and putting it into a form-factor like the old Nokia 8800 Sirocco, with that kind of "jewellery" build-quality (I had an 8800, and as a physical object it's still by far the nicest phone I've ever owned), so it can be carried in a jacket pocket, or a small purse or clutch?
Add a WiFi hotspot to the LTE modem, as there'd be enough internal volume to allow a decent battery and properly-spaced antennae, and it allows either a small tablet or laptop to take over the "detailed browsing and apps" role of the phone.
So, now I've got something that I can use to receive notifications and phone calls on (with the bonus of not looking like I'm starring in some sort of low-budget Dick Tracy fan-flick whenever I take a call) - plus I can use it to give my laptop or a tablet 4G connectivity wherever I am*. Plus it won't look out of place on the few occasions when I've got to dress up.
Oh, it also seems that I've made the actual "smartphone" redundant in the chain. I could live with that too.
I know there are solutions out there (the Punkt phone reviewed a day after this), but I'm comfortable enough in myself to admit that I'm sufficiently vain to want something that is beautiful, and not just functional.
(* I'm aware that you can get LTE-equipped laptops and tablets, but as network operators have chosen to not implement the "multiple SIMs, one account" feature of GSM, having another device means a new account, and basically paying twice for the same low data usage)
Jobs never used the disabled parking spots at the Apple campus.
...they were too far from the main entrance, so the Beloved Leader parked blocking the "Fire Department Only" area instead, as that left just a short, straight walk into the atrium of IL1. Upside: the car was visible, so you'd know to be on the lookout; downside: if there'd been a fire, you'd have had a greater chance of dying.
I believe it was Pixar's disabled parking spots which were regularly abused, and this is referenced in a scene in "Toy Story 2" where the villain is seen driving across the road from his home to his store, and parking diagonally across the only two disabled parking bays...
There is a reason it was green or amber on black, and not vice versa.
Yes, partly to reduce power consumption, but mainly to preserve the life of the tubes.
Secondarily to that, a full screen of "inverse" video on a terminal of that era will clearly show the scanlines that make up the display, which is visually distracting.
Whether text is more legible as black-on-white, or white-on-black depends mainly on how thick the strokes are: lighter typefaces work best as white-on-black; bolder ones work best as black-on-white. As an example of this, the road signage used in the UK uses two weights of the same typeface (imaginatively called "Transport Medium" and "Transport Heavy"): the "Heavy" weight is used for black lettering on white-backed signage, and the "Medium" weight is used for white lettering on the green and brown-backed signs. Visually, it's hard to notice that there are two different weights used, which is precisely the point.
I'll happily sing the praises of Qt for as long as you'll let me, but the existence of Qt doesn't do anything for the vast majority of iOS app developers whose codebase was created in Xcode and so contains a mix of Objective-C, C++ and Swift, all running an Apple UIKit user interface that is largely set out in absolute "pixel" distances for a portrait-oriented screen (I blame the use of graphic tools like Sketch for this, as they guide developers towards directly copying dimensions out of a drawing to get a "pixel perfect" copy of the designer's UI concept, rather than thinking about how the UI is logically laid out).
Qt apps tend to work better because simply by choosing to use Qt in the first place, you've demonstrated an understanding that the world of application development is not limited to one maker of phones. Also, as the quickest develop/test cycle with Qt is to build your mobile app as a desktop Qt app (rather than deploying it to a device simulator or a real phone), you quickly get a feel for what would need work if you were targeting both desktop and mobile users.
This is not going to work at all unless Apple introduces some kind of Mac with touch support. Even then, I suspect it'll be like running iPhone apps on an iPad - functional, but not up to the aesthetic standards that Mac users pretend are higher than other people's.
Anyone who's ever written a mobile app can tell you that operating a touch UI using a mouse or trackpad, as you do in the device simulator, is a horrible user experience - it's not the scrolling, but the layout of items that's a problem. Also, the "one fixed-size screen that's completely filled" nature of mobile apps allows designers much more leeway than a windowed environment does: there's precious little visual consistency between iOS apps, and that's something that will make a Mac desktop look just as messy as the bad old days of Windows 95, where every application producer insisted on using a custom window design because they could.
Apple surely must have been looking at what has been happening with Windows 10*, and how hard it has been for developers to design touch+desktop applications that work adequately on both - and that's on an OS with a sizeable number of touch-screen devices already out there. The problem isn't a technical one, but rather a higher workload being placed on designers. iOS really doesn't ask much more of a designer than "draw nice screens in Sketch". Simply introducing resizeable windows destroys almost all of the design assumptions underpinning iOS apps, so dealing with such flexible layouts raises the bar considerably. Then there are the different viewing distances to consider: a layout that's clear and legible on a phone has far too much whitespace when viewed on a desktop monitor... this is the persistent complaint against Windows "Metro" apps, and it's caused not by the framework but by designers settling for a compromise layout and sizes that look equally not-good at both viewing distances, rather than designing a loose (TV, mobile) layout and a tight (desktop monitor) one, and having the app choose dynamically at runtime.
I suspect the goal will be to allow direct code porting between iOS and macOS, rather than the more ambitious approach of a single executable that adapts its UI to suit the host's capabilities, as Microsoft took with UWP. While I think UWP's is the right approach, I also accept that few professional developers are given the planning time needed to do things right, as the mobile-app dev cycle is easily summed up as "screw the architecture, we need something running tomorrow".
(* ironically, the only significant UI change in this release, the light/dark UI switch with accent colour choices, is cribbed from Windows 10.)
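The "design loose and tight layouts, choose at runtime" approach described above might look something like this in miniature (all names and values here are illustrative, not any real framework's API):

```python
# Two complete layout parameter sets, selected at runtime by device
# class rather than one compromise layout that suits neither.
LAYOUTS = {
    # viewed from a distance / operated by finger: bigger targets, more whitespace
    "loose": {"font_pt": 20, "padding_px": 24, "min_target_px": 48},
    # viewed up close with a precise pointer: denser layout
    "tight": {"font_pt": 12, "padding_px": 8, "min_target_px": 24},
}

def pick_layout(device_class):
    # phones and TVs get the loose layout; desktops the tight one
    return LAYOUTS["loose" if device_class in ("phone", "tv") else "tight"]

print(pick_layout("phone")["padding_px"])   # 24 - roomy, touch-friendly
print(pick_layout("desktop")["font_pt"])    # 12 - dense, pointer-friendly
```

The point is that both layouts are deliberately designed; the runtime merely selects, which is cheap - the real cost the post identifies is paying a designer to produce two good layouts instead of one mediocre one.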
To add more detail: they are indeed 4-core parts in the Surface Laptop 2.
i5 (Consumer) = Intel Core i5-8250U, 3.4 GHz (turbo), 4 cores
i5 (Business) = Intel Core i5-8350U, 3.6 GHz (turbo), 4 cores
i7 = Intel Core i7-8650U, 4.2 GHz (turbo), 4 cores
That i7 is the exact same part used in last year's Surface Book 2, which is probably a strong sign that the Surface Book 2 itself will see a refresh soon.
Better yet, I'd hoped they would have followed the links on Microsoft's own online hardware store, and found the info. To save you the click, here it is:
Surface Pro 6 from £879 (128GB, i5, Platinum only; the black option only joins the range at the £1,149 256GB SSD model)
Surface Laptop 2 from £979 (128GB, i5, "Platinum" colour only; again, other colours only available from 256GB, at £1,249)
Delivery on both is from 16 October onwards, which is the same date as for US customers. I guess that Microsoft is very slowly learning that a product launch should be global. (Australians are also offered the same 16 October delivery date, incidentally)
The Surface Laptop 2 uses the latest, 8th generation, CPUs. No detail yet as to the part numbers; the specs page says only "Intel® Core™ 8th Gen i5 or i7". My reading of the Register article's "the same CPUs" comment suggests that the Laptop is now upgraded to a 4-core CPU; the previous model used dual-core parts. I'm sceptical, but it would be a pretty big upgrade in performance if true.
It's still USB Type A because the laptop is still running a USB 3.0 controller - there's no reason to use a Type C connector unless you offer USB 3.1, and implementing USB 3.1, particularly the charging function, would need far more changes to the laptop's internals than are justified for a mid-cycle refresh (which is what this is).
same general memory and processor options,
Memory, yes, but not processor. The "New Surface Pro (5)" model used 7th-generation CPUs, while the "Surface Pro 6" uses 8th generation parts. That's a significant improvement in power consumption... or performance, depending on the system designer's priorities.
Battery life is quoted the same, 13.5h, but the "battery life" test that everyone uses really only measures display and video decoding, so performance on more representative workloads could be very different.
How is it underhanded?
Those updates are not tested against your newer CPU, because that product is not supported on your CPU. Why would any company ship a release that they knew with 100% certainty had never been formally tested on a configuration? (Usual suspects, please resist the urge to look like a fucking idiot...)
Most likely the updates will work, but maybe they'll screw your system performance too. Remember, most of the fancy low-power and clock-speed regulation features in the latest CPUs are completely invisible to Windows 7's kernel: you're basically running with whatever Intel/AMD sets those registers to at reset time.
I understand why someone would prefer Linux to Windows 10, but then Linux is a modern, supported OS. Windows 7 isn't anymore. It's time to move on.
The # symbol is inconsistently present on Apple's UK keyboard layouts across all products. There wasn't any particular date at which it arrived, or left, as the company regularly had some products with it, and some without, with no logic to which is which. The only vague pattern is that "Pro" in the name tends to correlate with "#" on the key-cap, but it's not a guarantee.
My own recent experience: Pro 15" (2015) - yes, USB Keyboard (2010) - no. Pro (2010) - yes. Pro 13.3 (2012) - no.
Consistency. It's what they're known for.
(Placement of # , ~, " and ` is the one thing that still catches me when I move from Mac to Linux/Windows. Objectively, the "PC" key placements are more programmer-friendly, but I have decades of muscle-memory telling me to type Option-3...)
This was part of Gordon Welchman's work that revolutionised military signals intelligence: Previously, everyone concentrated on the message content, but Welchman is credited with being the first to realise that what is said in a message is only part of the information that can be extracted from it. The identities of the sender and recipient impart information, as does the schedule of transmissions, the transmission power, and the attitude of the sender: a good Morse listener can't just recognise a "fist", they can also hear how relaxed or stressed that sender is, just as you can tell how relaxed someone is by their voice.
So, if you recognise an operator in Hamburg's keying style and know he sends lazily-keyed confirmations to messages about fuel supplies every morning, usually to other operators in the Baltic, but suddenly he's responding to stations in Dover and Rotterdam as well and with more urgency than his usual lazy fist, then it suggests that there's something happening along the Channel that might be worth sending a spy-plane to look at. And you discovered this without having to decrypt a single message.
Also recommended - a good read on the subject. Hugh's brother, Simon Sebag Montefiore, is also a historian.
The Sebag Montefiore family (no hyphens, old bean) was the previous owner of Bletchley Park, selling it to a property developer in 1938. Admiral Hugh Sinclair of the Secret Intelligence Service was so convinced that it would be a good location for a codebreaking centre in the event of war that he bought the house for £6,000* from that developer with his own money, after being unable to convince HM Government of its value.
(* £6000 in 1938 pounds is about £250,000 in today's money, but the better comparison is that the average house cost around £500-600)
Yes, they do.
Jerzy Różycki, Henryk Zygalski, and Marian Rejewski are the men in question. All three ended up working for the Allies after the fall of Poland, although none worked on Enigma.
It was Zygalski who came up with the idea of using punched sheets to quickly reduce the possible key-space. Because with some (about 12%) initial rotor setups, enciphering the same letter at a known distance could produce the same ciphertext on both occasions, and because, prior to 1940, the plaintext messages tended to begin with a repeated preamble identifying the sender, it was possible to construct a set of "can never occur" sequences for each possible start position, which allowed the majority of possible keys to be discarded quickly by visual inspection. The remaining possibles would then be fed into the Bombe for brute-force decryption.
The use of the term "female" for these same-letter matches in ciphertext comes from Zygalski's original Polish term, but in general the origin of the techniques used was kept from the operatives at Bletchley, largely for security reasons (the name "Zygalski" certainly would not have been common in 1940s Britain, and would have stuck in the mind of workers at Bletchley; if overheard later by enemy spies, it could have tipped off the German intelligence services that the Allies were a. actively attacking Enigma, and b. starting from the attack previously used by the Polish)
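The elimination logic behind the sheets can be sketched abstractly. This is a toy model only, not a simulation of Enigma or of the sheets themselves: the idea is that each candidate start position has a precomputed set of indicator positions where a "female" can possibly occur, and any candidate whose set is contradicted by an observed female is impossible and gets discarded:

```python
# Toy model of the key-space elimination idea behind Zygalski's sheets.
# For each candidate rotor start position we hold the set of indicator
# positions at which a "female" (repeated ciphertext letter) CAN occur;
# a candidate that cannot produce an observed female is eliminated.
# The candidate names and sets below are invented for illustration.

def filter_candidates(candidates, observed_females):
    """Keep only start positions consistent with every observed female.

    candidates: dict mapping start-position -> set of positions where a
    female can occur. observed_females: positions where a female was
    actually seen in intercepted traffic."""
    return {
        start: possible
        for start, possible in candidates.items()
        if observed_females <= possible   # every observation must be possible
    }

candidates = {
    "AAA": {1, 4, 6},
    "AAB": {2, 3},
    "AAC": {1, 2, 4},
}
survivors = filter_candidates(candidates, observed_females={1, 4})
```

The historical sheets did this intersection physically: stacking punched sheets over a light source, so that only positions consistent with every observation stayed lit.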
Zygalski and Rejewski eventually enlisted in the Free Polish Army (based in UK), but were not given access to the Bletchley group, and were instead set to work on far simpler cipher traffic. Różycki died in 1942 when the ship carrying him from France was sunk, presumably as part of a German operation, as the passenger-list included other ex-members of the Polish Cipher Bureau.
Despite the three men never working there, there's a plaque to their memory at Bletchley, which clearly acknowledges the debt owed to the Polish services in cracking Enigma.
(All three men have reasonable summary articles on Wikipedia, and searching the names will find you a number of other articles on the subject of Polish Enigma-breaking)
Another W10M hold-out here. I'm genuinely at a loss as to what to buy when it finally dies: Android, from what I've seen on friends' phones, feels like a really shoddy OS compared to W10M, and using iOS is basically like signing up for Scientology. A voice-phone with built-in 4G dongle and a small laptop looks the most likely thing.
But, about apps...
WP8.1 to WP10 actually is a recompile (with a bit of search/replace), unless you'd used Silverlight (WP7) to write the app, in which case it's a lot of search/replace.
The problem is that just doing a port of a WP8.1 app to Windows 10 doesn't allow you to use the big feature of UWP: a single executable that works (not just "runs") on desktop, tablet and mobile.
Mobile-phone apps are easy to design because the size of the display doesn't change dynamically. Desktop apps aren't because it does: grab that window-sizer, and see the UI designer flinch. Add in the different interface choices to accommodate both mouse and touch input, and it becomes a significant amount of work to make something that will work properly on different devices and isn't crap.
To get that working, you have to redesign your UI pretty much from scratch to make fewer assumptions about the screen aspect, resolution, viewing distance and the input methods used.
It's a lot of work, and Microsoft's lack of a mobile platform makes it not a lot of gain for that work. (Not many apps make sense on both XBox and PCs).
"2. Bash is too primitive for Windows admins" would be closer to the truth.
I've spent almost all of my career on non-Windows systems (MacOS, embedded Unix, Linux, etc) but I had to do process automation stuff in Windows in one job, and after that, I can say that PowerShell is, without doubt, the best shell programming system I've ever used for actually getting shit done quickly (and being able to understand it again six months later).
At least 20% of any Bash script seems to be spent parsing out the results of other commands using a witches' brew of sed, awk and grep, then squirting that into other commands either as arguments or input. This is a lot of effort with near zero value-add, and it's famously error-prone too (and there's also the security implications of using commands like 'eval' on what is basically free-form text*).
PowerShell does away with pretty much all of that: when you put a PowerShell command into a pipe, the output isn't text, but rather structured data (imagine that it's JSON, even though it's not). You can select or format or re-package the data any way you want, and pass it on along the pipe. If, like me, you've spent any amount of time writing shell scripts, PowerShell is like stepping into the sunlight.
( * no, I wouldn't either. But I've had to take it out of a lot of scripts written by people who did... )
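As a rough analogy - in Python rather than actual PowerShell syntax - the difference between piping text and piping structured data looks like this. The sample records are invented:

```python
# Rough analogy (Python, not PowerShell) for the contrast described above.
# The text-style pipeline must re-parse free-form output with string
# surgery before it can do anything; the structured-style pipeline passes
# records whose fields survive intact, the way PowerShell passes objects
# down a pipe.

# Text-style: recover the PIDs from a ps-like listing by splitting strings.
raw = "PID  NAME\n101  sshd\n205  nginx\n"
text_pids = [line.split()[0] for line in raw.splitlines()[1:]]

# Structured-style: the data never stops being data, so selecting and
# filtering are ordinary field accesses, with no fragile parsing step.
processes = [{"pid": 101, "name": "sshd"}, {"pid": 205, "name": "nginx"}]
struct_pids = [p["pid"] for p in processes if p["name"] == "nginx"]
```

Note that in the text version the PIDs come back as strings and the whole thing silently breaks if a column is added; the structured version has neither problem.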
I feel compelled to point out that, like pretty much all of the "sanitised" proposals, those terms also do not describe a Master/Slave relationship in the meaning it has in computing... unless you really expect the Pimp to directly attend to the carnal needs of clients, and then replay them later, blow for blow (as it were) on the unfortunate Ho.
"Actually, the RGB triplet is inherently prejudiced: it codes black using the lowest value possible, while white is coded as the highest possible - a blatant example of the white-centered hegemony (that allowed someone of my limited thinking skills to acquire an the expensive college education I'm now wasting on such trivialities)
Of course, as befits an activist of my calibre, my indignation on this issue isn't sufficiently-well informed to have looked at, say, the CMYK colourspace" </sarc>
If you're translating from French (or German, Italian or Spanish), do yourself a favour and use DeepL... If DeepL doesn't have your language, use Bing Translator in preference to Google. Google is a really dumb pattern matching engine, whereas those two use more sophisticated linguistic analysis (DeepL is in a class above Bing, though)
Google Translate was cool a decade ago, but is a very poor tool by today's standards. Sadly, its ubiquity makes people think it's in some way representative of modern machine translation - an error this paper's authors have helped to perpetuate.
It's worth noting that Peter Skillman was only appointed to take over Skype design duties a mere five months ago (according to his LinkedIn bio). I'm a big admirer of his previous work both at Palm (webOS) and Nokia (N9 Harmattan, Asha Fastlane), so I'm hoping that he can do something to simplify the way Skype works, which is the problem it still has, rather than simply simplifying the way it looks, which is how Microsoft got into this mess.
Sorry for pedantry, but "Götterdämmerung" is not the death of the gods, but rather their twilight. The difference is important, because the words "twilight" and "Dämmerung" mean both the dim light after sunset and the dim light before sunrise. That double-meaning produces what is an unusually subtle title for Wagner.
Tolkien was often asked about Wagner's operas in his lifetime, and despite his famous answer that "Both rings were round, and there the resemblance ceases", the idea of the ring itself being the instrument of power, and of it corrupting the soul of its wearer are both points that are hard to find evidence for in the mythology until after Wagner's operas. Maybe this reluctance to acknowledge the influence could be because, by the time LoTR was published, the use of Wagner's work by the Nazis would have been fresh in the memory of potential readers.
That happens when you let Germans design a plug
SCART = "Syndicat des Constructeurs d'Appareils Radiorécepteurs et Téléviseurs", so I think you might be pissing on the wrong bank of the Rhine, there.
For a standard developed in the 1970s, it proved to be extremely far-sighted: particularly in standardising a method for direct RGB input to TVs, something which only became useful in the late 1980s for home computers, and really only hit the broader consumer market in the late 1990s when Digital TV and DVD became popular.
Short-selling is not dubious, it's not bad, it's not "evil". I'd expect a whole lot better from technical people than just spouting someone else's ideologies without investigating the reasons why the phenomenon exists.
Consider a stock market as a control system for pricing (Economists, forgive the errors and simplifications). The "proper" price of a stock is what people think it's worth (not the intrinsic value of the company, and certainly not what the company owner thinks its worth) . That "proper" price is the set-point of your control system. What's listed on the exchange is the output of the system (the offer-price). The error signal is the difference between the "proper" price and the offer price. The feedback mechanism is the difference between purchase prices (accepted bid-prices) and the offer-price.
Now, if you've done any control systems theory, you'll see there's a problem with that outline: it has unequal resistance. Sellers rarely accept bids below the current listed value, so you can only buy at the price offered, or wait until it falls naturally. However, if you want to buy at higher than the offer price (i.e., inject positive feedback into the system) you can do so immediately. This resistance to negative movement, if unchecked, produces a market where stocks become overvalued, and then collapse catastrophically once the offer-price drifts so far away from the "true" value that buyers stop buying.
In a phase of price growth, short-selling is a stronger downward adjuster of stock prices than merely not buying. As an example, an investor holds 100 shares of Stock T and believes that it's over-valued, but also wishes to retain, or later increase, their shareholding in T (i.e., they have confidence in the company, but concerns that its stock price is ballooning). Simply selling the stock they believe to be overpriced at market value actually injects positive feedback into the system - someone buys it, and the price finds a new, higher, perceived floor. Selling the stock at below market value doesn't have an effect as long as other sellers keep selling at the highest price they can get (the rational behaviour).
The answer to this problem is to short-sell. Sell half the stock, but sign a contract to buy it back again at whatever market price is a month from now. If the price isn't lower by then, you've lost. If it is, you gain. Either way, you, as the investor, have provided a strong feedback signal to the market that the current price of the stock is above its "proper" value, and done so in a way that doesn't force you into irrational behaviour (selling at lower than the best price available)
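The payoff arithmetic is simple enough to put in a few lines. The figures below are invented for illustration, not taken from any real trade:

```python
# Worked numbers for the short sale described above: sell shares at
# today's price, commit to buying them back at whatever the market
# price is later. Profit if the price falls, loss if it rises.
# All figures here are invented toy values.

def short_pnl(shares, sell_price, buyback_price):
    """Profit (positive) or loss (negative) on a simple short position."""
    return shares * (sell_price - buyback_price)

# Price falls from 100 to 90 over the month: the short gains.
gain = short_pnl(50, 100.0, 90.0)   # 500.0
# Price rises to 110 instead: the short loses the same way.
loss = short_pnl(50, 100.0, 110.0)  # -500.0
```

The symmetry is the point: the short-seller is exposed in both directions, which is what makes the signal they send to the market a credible one rather than free talk.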
For companies, like Tesla, that are sold on the promises of the company itself, short-selling is the only safe corrective mechanism available to stop the price creating a bubble. Earnings reports won't depress prices the way they do with established companies, because Tesla is sold on the basis that it's not going to make money for a long time. Without short-selling, the only corrective measure is a cessation of buying, which causes a dramatic price crash, just as an overrunning control signal eventually hits a safety limit and shuts down the process under control.
If someone is childish enough to think that a company's stock price is a measure of personal approval for, and judgement of, its CEO, then that person is certainly going to be upset with short-sellers.
The story says "Government Issued ID", but neglects to mention that this definition includes a Drivers License ("S"-spelling because that's the name of the document, don't bug me)
Anyone who's spent time in the USA will know how often people are asked to show this document, and it's commonplace for background-check services to request (and receive) scans of it. So, there's the first hurdle down: I've got someone's ID photo. Now, if I project that image onto a balloon, and let the app's phone camera grab it, can I vote for them? I bet I can. Hell, why don't I just run the app under a rooted phone with debug enabled, and simply bypass or fake the validation step. Expensive, yes, but a State Governorship is worth more than a few million dollars, if campaign budgets are any judge.
The way around this nonsense is simple: Vote in person, make a mark in a box printed on a sheet of paper using a pencil. By all means use computers to assist in the counting of those ballots (and in the first-past-the-post electoral system used across the USA, it is trivial to count ballots optically), but the initial input needs to be handmade.
All mechanical and digital systems for recording votes are vulnerable to undetectable ballot-stuffing. X-in-a-box systems can at least be subjected to graphological analysis if there's suspicion, but a hole in a card, or a row in a database has no trait that can reveal that it was created in bulk by a single person.
Biting the hand that feeds IT © 1998–2019