Re: Personally, I'd rather they fixed the fucking potholes.
Local government is responsible for fixing potholes, not OS. Different department and budget.
Potentially they can use this data to identify and prioritize pothole repairs.
"and matches it everywhere else" (if you ignore battery life, CPU and GPU performance that is). It was also roughly the same price as the iPhone 8 that it launched against.
The problem that Apple are hitting is that, from a hardware point of view, you don't need to spend much money these days to be good enough for most users. Ecosystem and software are separate questions.
@Miffo, you missed a couple of steps. They had a UNIVAC 1110, which initially was batch only (coding sheets sent in, punched cards and output sent back). They added interactive terminals. The poor UNIVAC struggled under the load. Sperry said they could upgrade it (for a fee), only to discover that someone had already changed the jumpers so the upgrade was in place. At that point they moved to a cluster of Prime minis.
@aqk, they are talking about 25 ms pings and 100 Mbit/s or faster. How much they charge for it will be the kicker, but that will depend on your local market as, the way I understand it, SpaceX will partner with local ISPs to provide the ground side of the equation.
@Lusty. It’s nothing to do with making the sensor smaller (smaller pieces of film or smaller image sensors are always worse at imaging than larger ones, but they trade that against cost and the size/weight of the camera), but rather the amount of magnification required to resolve even a 10 meter object from 400 miles away. You can’t get away with a teeny sensor and lens in this case; you need something like a telescope, with a large objective lens, to gather enough light.
Any optics that will fit on a mini-sat aren’t going to be able to resolve more detail than you can get from existing commercial birds, and they can’t resolve anything smaller than a few meters in size. Something the size of a person MIGHT show up as a single pixel. I doubt you have much to worry about.
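As a rough back-of-the-envelope check on that claim (a sketch using the Rayleigh diffraction criterion, assuming 550 nm visible light and a ~644 km orbit, i.e. roughly 400 miles; it ignores atmosphere, sensor pixel pitch and light-gathering, so real optics must be larger still):

```python
def min_aperture_m(ground_resolution_m, altitude_m, wavelength_m=550e-9):
    """Diffraction-limited (Rayleigh) objective diameter needed to resolve
    a feature of the given size from the given altitude. Atmosphere, sensor
    pixels and light gathering are ignored, so this is a hard lower bound."""
    angular_resolution = ground_resolution_m / altitude_m  # small-angle approx
    return 1.22 * wavelength_m / angular_resolution

altitude = 644_000  # ~400 miles in metres
for res in (10.0, 1.0, 0.5):
    print(f"{res:>4} m ground resolution -> "
          f"{min_aperture_m(res, altitude) * 100:.1f} cm aperture")
```

Even at the theoretical limit, resolving something person-sized (~0.5 m) from that altitude needs an objective the better part of a metre across, which is never going to fit on a mini-sat.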
Intel have a huge budget to work on improved Core and Xeon CPUs (don't forget, they share the same architecture, and business users would snap up a significantly improved version). If they could throw 100% more transistors at the design and make each core even 20% faster then they would do it in a heartbeat. They have AMD breathing down their necks, and AMD are getting close to the same performance per clock cycle. The limiting factor for single-threaded performance is clock speed (diminishing returns set in long ago on improvements to the design logic), and the maximum clock speed hasn't changed much over the last three or four process sizes. Yes, you can make the chips go faster, but at the cost of serious heat.
The reason that an i5 is the recommended CPU for gaming is that the single threaded performance doesn't get much faster the higher you go up the range. Most games offload a huge amount of the work to the GPU, but then modern GPUs are very highly parallel so it's not a like-for-like comparison. Game writers work within the limits of what is available, and what their target audience can afford to buy. Most games therefore include a quality configuration setting that adds or removes eye candy, trading against CPU/GPU performance. Try your i5 at 4K extreme settings.
@Dave 126 Desktop CPUs haven’t been getting much faster at single-threaded tasks in recent years because Intel et al. have run out of things they can do to make them faster. Clock speeds have hit limits imposed by the switching speed of silicon vs power consumption. All the optimisations they can think of are implemented (a modern CPU uses a huge number of transistors). Fabrication process improvements have slowed to a crawl. They’re left with stuffing more cores onto a chip and adding custom hardware for specific tasks like video encoding/decoding.
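The clock-vs-power trade-off being alluded to is usually approximated by the classic CMOS dynamic-power relation P ≈ C·V²·f, and since supply voltage has to rise roughly with frequency, power grows much faster than clock speed. A sketch (the capacitance, voltage and frequency figures are invented purely for illustration):

```python
def dynamic_power_w(switched_capacitance_f, voltage_v, freq_hz):
    """Classic CMOS dynamic-power approximation: P = C * V^2 * f."""
    return switched_capacitance_f * voltage_v**2 * freq_hz

# Illustrative only: doubling the clock from 4 GHz to 8 GHz while raising
# the supply voltage 20% to keep the transistors switching fast enough
# nearly triples the dynamic power -- hence "serious heat".
base = dynamic_power_w(1e-9, 1.0, 4e9)
boosted = dynamic_power_w(1e-9, 1.2, 8e9)
print(f"{boosted / base:.2f}x the power for 2x the clock")
```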
This is related to the article in what way? Governments being slow to embrace new technology was what it was about, and you go off on a rant about them refusing to work with old technologies (probably because the old software you are using is known to be insecure).
Bottom line: it isn't clear how Morrisons could, within normal business constraints, have prevented this.
Other than by, for example, making it company policy that sensitive information should not be loaded onto a USB drive, and by applying technical controls to ensure that it didn't happen. If the external auditor truly needed access to this data then provide him with a remote desktop account so that he can see it but not download it.
What part of that was difficult?
It costs the wrong side of $1bn to launch and won't be remotely ready for a manned launch until 2022 at the earliest. Better to launch an unmanned Soyuz capsule to use as a replacement for the current crew (being unmanned there's less concern over another failure).
FPGAs are always slower and more expensive than the equivalent design in an ASIC. It’s just the nature of the beast (it needs to be larger than an ASIC as there’s lots of general purpose stuff in there that may or may not be used, and the distance between functional blocks is larger so signal propagation is slower). This includes putting CPUs in soft format on an FPGA.
If it is too expensive to create an ASIC, or the design is subject to change, and the task at hand is well suited to parallel execution in hardware, then this is the use case for an FPGA.
Both Altera/Intel and Xilinx are producing FPGAs with hard ARM CPUs these days (Cyclone V SE models from Altera and Zynq models from Xilinx). These ARM cores can run much faster than soft cores while being easy to connect to the FPGA logic (via AXI or AMBA protocols, for example).
Intel are starting to add FPGA blocks to Xeon processors so that custom hardware acceleration can be added to traditional sequential programs.
It's a twin core, 32 bit micro-controller on a chip with built in RAM, flash ROM, WiFi and Bluetooth. It's cheap ($4 on a module or $2.80 for the chip alone) and can run the Alexa SDK. All Amazon need to do is produce their own native stack for it and the job's done.
To be fair, Intel's 10nm process is about the same size & density that other manufacturers are claiming for their 7nm processes. But yes, they have lost their technological lead in fab processes (there was a time that they were about 1.5 nodes ahead of everyone else).
@sabroni - The styling of a BMW doesn't physically change much between years either. The 8 has significant technical improvements over the 7 (faster CPU, faster LTE, bigger flash memory etc). The 7 remains in production, and has been bumped down in the range.
The X was never intended to replace the 7, it was a new high end model, just like BMW added the 9 series above the 7 series.
Sells better than the premium, much more expensive iPhone X (which is ONLY the third best selling phone in the world), and this proves that Apple have got things badly wrong?
BMW should stop selling the 7 series because they sell more 5s and 3s using that logic. This can only be an Orlowski rant given the level of thought involved.
(No, I don’t own an iPhone X BTW)
Want to slag off a charity auction?
It’s not something I’d want to buy myself, but there are plenty of folks out there who are prepared to pay vast amounts of money for rare, obsolete technology (old cars for example). If it floats their boat and charity benefits then do we really want to call them names and look down on them?
How does anyone make a casino go bankrupt? They finance it with a bond offering an impossibly high interest rate, then threaten any financial analyst with legal action if they try to point out that it can't possibly make money while paying off debt at that rate.
BT have been milking the copper network for as long as they can. Ofcom have been cutting back on the amount they can charge for it, and BT have been allowing it to slowly rot. At this point they say "please sir, if we can charge more money we can roll out this shiny new fibre network (except for the VDSL bits of it we're not going to mention)"?
Which will, no doubt, be far faster in operation and quicker to reconfigure, having no mechanical parts to move about. MEMS-type hardware is useful for some kinds of work (think display systems, transducers, etc.), but not so much when it comes to logic gates and pure electronics. You want as small and fast as possible for that, most of the time.
1) It's a German company, launching from a Russian base.
2) The launcher is quite reliable.
3) The programming of the 3rd stage has been dodgy in the past, but programming errors are much easier to fix than hardware issues, and 10 successful launches in a row suggest that they have been solved.
4) It's much cheaper than an Ariane launch, and the satellite doesn't need the capacity that Ariane offers.
The US government basically turned their public against supersonic flight by deliberately flying military aircraft at supersonic speed, multiple times per day, across high population areas to see if they would object.
At the kind of level that Concorde cruised (around 55,000 feet) the noise wasn't too bad. While it was subsonic and in/outbound from an airport the noise was much worse (the Olympus turbojet, especially running with reheat, was in no way designed to be quiet).
Tesla are saying that they can’t retrieve the data yet (probably because their cellular link to the car is down), not that the storage media have been destroyed. There looks to have been a serious amount of damage to the car (partial front impact, with the concrete pushing as far back as the passenger cabin).
As to the suggestion that the batteries be moved to the back of the car, other than it messing up the centre of gravity (the weight being low allows it to corner well) what happens if it is struck from behind? The damaged batteries didn’t explode like a petrol fire (nor should they) so it wasn’t a major issue.
The difference is that GPS/Galileo broadcast a time and position signature on a defined frequency. Pulsars MAY become useful for space navigation, but you'll never get centimetre-level precision from them because the location of the source and the time offset are not as precisely known. If you have centimetre-level precision (which GPS, even in Block III form - currently not due to go live until around 2023 - can't match) then you can use the system for precision airfield approaches or automated vehicles.
>you can only put just under two million records in excel so that's like 25 worksheets.
Erm, Microsoft make more than one program that can handle sets of data. Access, for example, lets you have databases up to 2 GB in size, while SQL Server, depending on the edition chosen, can handle terabytes at a time.
That’s just if you stick with MS. There are plenty of other database systems out there which can handle databases way larger than a puny 50 million records.
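To illustrate how unremarkable row counts in the millions are, even a free embedded database shrugs them off (a sketch using Python's built-in sqlite3 module; the table and figures are invented, and the demo inserts 100,000 rows to keep it quick, but SQLite's own limit is around 2^64 rows per table):

```python
import sqlite3

# In-memory database for the demo; a file-backed one is limited only by disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (id INTEGER PRIMARY KEY, salary REAL)")

# Bulk-insert 100,000 invented rows from a generator.
conn.executemany(
    "INSERT INTO payroll (salary) VALUES (?)",
    ((20000 + (i % 50000),) for i in range(100_000)),
)

count, = conn.execute("SELECT COUNT(*) FROM payroll").fetchone()
print(count)  # 100000
```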
who, if they need a baby in a month's time, goes out and gets 9 women pregnant.
Not all problems respond to having more resources thrown at them. Those that do rarely scale in a linear fashion. Google used 89 instances to get the performance they achieved with TensorFlow. Even with perfect scaling you'd need another 4005 instances to match the IBM system. Starting to think about the cost yet?
POWER9 is a new platform. IBM will build based on orders. It seems that even Google have ordered them for their data centres, so it's likely that you will be able to use them via the cloud as well.
Not all programs are available as source code on Github. Many of those that aren’t are leaders in their field.
Searching Google works better if you use full names rather than abbreviations (“Snap Machine Learning” in this case). New stuff will return fewer entries than active old stuff.
IBM are saying that they have a new, as yet unreleased system for their Power minicomputers that is significantly faster than TensorFlow. It’s up to buyers to decide if they want to pay for the IBM solution, and accept the supplier lock-in that comes with it. In a commercial environment the speed is often worth it.