Agreed. You have more than enough horsepower in a Raspberry Pi to handle emulation of the VCS (in fact pretty much any games console up to the PlayStation 2 can be handled by a Pi 3 B+)
Most embedded jobs work quite happily with a 32-bit or smaller MCU (with an FPU if they're lucky), so why would you need 400+ GFLOPS for one? And why would you use a system without a predictable cycle time? There's a good reason MCUs are slower than conventional CPUs: it means you can guarantee how long each chunk of code takes to run.
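The determinism point is just arithmetic. On a simple in-order MCU where every instruction has a fixed cycle cost, a delay loop's duration can be computed exactly. A quick sketch with assumed numbers (a hypothetical 16 MHz part, 4 cycles per loop iteration, not any specific chip):

```python
F_CLK = 16_000_000   # Hz, hypothetical MCU clock
CYCLES_PER_ITER = 4  # assumed fixed cost per iteration (decrement, compare, branch)

def delay_loop_seconds(iterations):
    """Exact duration of a busy-wait loop on a cycle-deterministic core.
    No caches, no branch prediction, no out-of-order execution, so this
    figure is guaranteed -- the property a GHz-class CPU can't offer."""
    return iterations * CYCLES_PER_ITER / F_CLK

# 4000 iterations -> exactly 1 ms, every single run
assert delay_loop_seconds(4000) == 0.001
```

On a desktop-class CPU the same loop's wall time depends on cache state, frequency scaling and what else is running, which is exactly why hard real-time work prefers the slower, predictable part.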
Over 300 people dead? Two accidents within a few months, in a new type shortly after it was introduced? Yes, that calls for a suspension of flying until they can identify the cause. What world do you live in where you can say "we don't know why these aircraft crashed, let's keep risking passengers while we find out"?
It's because they've changed the position and size of the engine nacelles and pylons in order to fit larger engines. This tends to make the aircraft nose up in some circumstances, which could cause a stall. MCAS is there to trim nose down if it detects that, but in a fit of genius Boeing saw fit to have only two AoA sensors, not three. If one is faulty, how can the software tell which one?
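The two-versus-three sensor problem is easy to see in code. A hedged sketch (illustrative majority-vote logic, emphatically not Boeing's actual implementation): with two readings you can detect a disagreement but not say which sensor is wrong; with three you can vote out the odd one.

```python
def vote_aoa(readings, tolerance=5.0):
    """Return (value, status) for a list of AoA readings in degrees.
    Hypothetical voting logic for illustration only."""
    if len(readings) == 2:
        a, b = readings
        if abs(a - b) <= tolerance:
            return (a + b) / 2, "ok"
        return None, "disagree"          # fault detected, but source unknown
    if len(readings) == 3:
        a, b, c = sorted(readings)
        # the middle value must agree with at least one neighbour;
        # an outlier beyond tolerance gets voted out
        if abs(a - b) <= tolerance:
            return (a + b) / 2, "ok" if abs(c - b) <= tolerance else "voted out high"
        if abs(c - b) <= tolerance:
            return (b + c) / 2, "voted out low"
        return None, "no agreement"
    raise ValueError("expected 2 or 3 readings")
```

With two sensors the best the software can do on a split is throw up its hands; with three it can keep flying on the two that agree.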
I'd be surprised if it was. They tend to load everything into containers and/or pallets so the aircraft can be rapidly loaded and unloaded, while allowing the cargo to be easily locked down in flight. You'd need multiple failures in the fastenings, and all towards the end of the flight.
Looking at the flight track, they were flying at about 6000 feet above a wildlife sanctuary. Possible bird strike? Two engines out at that height wouldn't give them a lot to play with.
"Tere's simply no rational reasons to run ARM servers."
Other than the fact that you can get a given amount of CPU horsepower for less money, using less power?
For many workloads those are compelling reasons. That's how x86 got its foot in the door. It wasn't very good, but it was cheap and fast enough.
You could easily get bit-mapped monochrome graphics for the PC (go look up the Hercules Graphics Card); the problem was that the 8088 at 4.77MHz sucked compared to a 7.8MHz 68000.
The fact that the 8088 was a 16-bit CPU with an 8-bit data bus didn't help, but the 64K segment size and a limited set of registers (many instructions were restricted as to which registers they could use, which made things worse) were also killers. Windows didn't really become usable until the 386 arrived.
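The 64K segment pain is easy to demonstrate. A quick sketch of 8086/8088 real-mode addressing (physical address = segment × 16 + offset), showing both the 64K window a single segment register gives you and the aliasing it causes:

```python
def physical(segment, offset):
    """8086 real-mode address: 20-bit physical = segment * 16 + offset."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # address bus wraps at 1 MB

# The offset is only 16 bits, so one segment register spans just 64K...
assert physical(0x1000, 0xFFFF) - physical(0x1000, 0x0000) == 0xFFFF
# ...and many different segment:offset pairs alias the same byte:
assert physical(0x1234, 0x0010) == physical(0x1235, 0x0000)
```

Any data structure bigger than 64K meant juggling segment registers in software, which is a large part of why flat-32-bit 386 code was such a relief.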
"and matches it everywhere else" (if you ignore battery life, CPU and GPU performance that is). It was also roughly the same price as the iPhone 8 that it launched against.
The problem that Apple are hitting is that, from a hardware point of view, you don't need to spend much money these days to be good enough for most users. Ecosystem and software are separate questions.
@Miffo, you missed a couple of steps. They had a UNIVAC 1110, which initially was batch only (coding sheets sent in, punched cards and output sent back). They added interactive terminals. The poor UNIVAC struggled under the load. Sperry said they could upgrade it (for a fee), only to discover that someone had already changed the jumpers so the upgrade was in place. At that point they moved to a cluster of Prime minis.
@aqk, they are talking about 25ms pings and 100Mbit/s or faster. How much they charge for it will be the kicker, but that will depend on your local market as, the way I understand it, SpaceX will partner with local ISPs to provide the ground side of the equation.
@Lusty. It’s nothing to do with making the sensor smaller (smaller pieces of film or image sensor are always worse at imaging than larger ones, but cameras trade that against cost, size and weight); it’s the amount of magnification required to resolve even a 10 meter object from 400 miles away. You can’t get away with a teeny sensor and lens in this case, you need something like a telescope with a large objective lens to gather enough light.
Any optics that will fit on a mini-sat aren’t going to be able to resolve more detail than you can get from existing commercial birds, and they can’t resolve anything smaller than a few meters in size. Something the size of a person MIGHT show up as a single pixel. I doubt you have much to worry about.
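A back-of-the-envelope diffraction limit bears this out. A minimal sketch assuming the Rayleigh criterion, green light at 550 nm, and an orbit of roughly 650 km (about 400 miles); the numbers are mine, not from any specific satellite:

```python
WAVELENGTH = 550e-9  # metres, green light (assumed)
ALTITUDE = 650e3     # metres, roughly 400 miles up (assumed)

def aperture_for_resolution(ground_res_m):
    """Rayleigh criterion: theta = 1.22 * lambda / D.
    The smallest resolvable ground feature is theta * altitude,
    so solve for the required objective diameter D."""
    return 1.22 * WAVELENGTH * ALTITUDE / ground_res_m

for res in (10.0, 1.0, 0.3):
    # roughly 4 cm, 44 cm and 1.45 m respectively
    print(f"{res:>4} m ground resolution -> "
          f"{aperture_for_resolution(res) * 100:.1f} cm aperture")
```

A few-metre resolution fits minisat-sized optics, but anything approaching person-sized detail needs a telescope-class objective that simply won't fit.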
Intel have a huge budget to work on improved Core and Xeon CPUs (don't forget, they share the same architecture, and business users would snap up a significantly improved version). If they could throw 100% more transistors at the design and make each core even 20% faster, they would do it in a heartbeat. They have AMD breathing down their necks, and AMD are getting close to the same performance per clock cycle. The limiting factor for single-threaded performance is clock speed (diminishing returns set in long ago on improving the design logic), and the maximum clock speed hasn't changed much over the last three or four process nodes. Yes, you can make the chips go faster, but at the cost of serious heat.
The reason an i5 is the recommended CPU for gaming is that single-threaded performance doesn't get much faster as you go up the range. Most games offload a huge amount of the work to the GPU, but modern GPUs are very highly parallel so it's not a like-for-like comparison. Game writers work within the limits of what's available and what their target audience can afford to buy. Most games therefore include quality settings that add or remove eye candy, trading it against CPU/GPU performance. Try your i5 at 4K on extreme settings.
@Dave 126 Desktop CPUs haven’t been getting much faster at single-threaded tasks in recent years because Intel et al have run out of things they can do to make them faster. Clock speeds have hit limits imposed by the switching speed of silicon versus power consumption. All the optimisations they can think of are already implemented (a modern CPU uses a huge number of transistors). Fabrication process improvements have slowed to a crawl. They’re left with stuffing more cores onto a chip and adding custom hardware for specific tasks like video encoding/decoding.
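This is also why piling on cores doesn't rescue single-threaded work: Amdahl's law caps the speedup by the serial fraction. A quick illustration (the formula is standard; the example numbers are mine):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work scales
    across cores and the rest stays stubbornly serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even if 90% of the work parallelises perfectly, 8 cores give
# well under 5x, and infinitely many cores cap out at 10x:
assert round(amdahl_speedup(0.9, 8), 2) == 4.71
assert round(amdahl_speedup(0.9, 10**9), 1) == 10.0
```

Hence the custom accelerator blocks: for the tasks they cover, fixed-function hardware sidesteps the serial bottleneck entirely.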
How is this related to the article? It was about governments being slow to embrace new technology, and you've gone off on a rant about them refusing to work with old technologies (probably because the old software you're using is known to be insecure).
"Bottom line: it isn't clear how Morrisons could, within normal business constraints, have prevented this."
Other than by, for example, making it company policy that sensitive information should not be loaded onto a USB drive, and by applying technical controls to ensure that it didn't happen. If the external auditor truly needed access to this data then provide him with a remote desktop account so that he can see it but not download it.
What part of that was difficult?
It costs the wrong side of $1bn to launch and won't be remotely ready for a manned launch until 2022 at the earliest. Better to launch an unmanned Soyuz capsule to use as a replacement for the current crew (being unmanned there's less concern over another failure).
FPGAs are always slower and more expensive than the equivalent design in an ASIC. It’s just the nature of the beast (it needs to be larger than an ASIC as there’s lots of general purpose stuff in there that may or may not be used, and the distance between functional blocks is larger so signal propagation is slower). This includes putting CPUs in soft format on an FPGA.
An FPGA is the right choice when an ASIC would be too expensive to create, or the design is subject to change, and the task at hand is well suited to parallel execution in hardware.
Both Altera/Intel and Xilinx produce FPGAs with hard ARM cores these days (Cyclone V SE models from Altera, Zynq models from Xilinx). These hard cores run much faster than soft cores while remaining easy to connect to the FPGA fabric (via AMBA AXI interfaces, for example).
Intel are also starting to add FPGA blocks to Xeon processors, so that custom hardware acceleration can be bolted onto traditional sequential programs.
It's a dual-core, 32-bit microcontroller with built-in RAM, flash ROM, WiFi and Bluetooth. It's cheap ($4 on a module, or $2.80 for the chip alone) and can run the Alexa SDK. All Amazon need to do is produce their own native stack for it and the job's done.
To be fair, Intel's 10nm process is about the same size and density as other manufacturers are claiming for their 7nm processes. But yes, they have lost their technological lead in fab processes (there was a time when they were about 1.5 nodes ahead of everyone else).
@sabroni - The styling of a BMW doesn't physically change much between years either. The 8 has significant technical improvements over the 7 (faster CPU, faster LTE, bigger flash memory etc). The 7 remains in production, and has been bumped down in the range.
The X was never intended to replace the 7, it was a new high end model, just like BMW added the 9 series above the 7 series.
So it sells better than the premium, much more expensive iPhone X (which is ONLY the third best-selling phone in the world), and this proves that Apple have got things badly wrong?
By that logic BMW should stop selling the 7 series, because they sell more 5s and 3s. This can only be an Orlowski rant, given the level of thought involved.
(No, I don’t own an iPhone X BTW)
So you want to slag off a charity auction?
It’s not something I’d want to buy myself, but there are plenty of folks out there who are prepared to pay vast amounts of money for rare, obsolete technology (old cars for example). If it floats their boat and charity benefits then do we really want to call them names and look down on them?
How does anyone make a casino go bankrupt? They finance it with a bond offering an impossibly high interest rate, then threaten any financial analyst with legal action if they try to point out that it can't possibly make money while paying off debt at that rate.
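The mechanism is simple arithmetic. A hedged sketch with hypothetical figures (not the actual casino's books): at a high enough coupon, the debt service alone outruns anything the business can plausibly earn.

```python
def can_service_debt(principal, rate, annual_free_cash):
    """Hypothetical numbers: can the business even cover the coupon?
    Returns (affordable?, annual interest due)."""
    interest = principal * rate
    return annual_free_cash >= interest, interest

# A $1bn bond at a 12% coupon needs $120M/yr in interest alone;
# a casino throwing off $90M/yr of free cash can never keep up.
ok, coupon = can_service_debt(1_000_000_000, 0.12, 90_000_000)
assert coupon == 120_000_000 and not ok
```

Which is exactly the sum an analyst would point out, legal threats permitting.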
BT have been milking the copper network for as long as they can. Ofcom have been cutting back on how much they can charge for it, and BT have been allowing it to slowly rot. At this point they say "please sir, if we can charge more money we'll roll out this shiny new fibre network (except for the VDSL bits we're not going to mention)".
Which will, no doubt, be far faster in operation and quicker to reconfigure, having no mechanical parts to move about. MEMS type hardware is useful for some kinds of work (think display systems, transducers etc), but not so much when it comes to logic gates and pure electronics. You want as small and fast as possible for that most of the time.
1) It's a German company, launching from a Russian base.
2) The launcher is quite reliable.
3) The programming of the 3rd stage has been dodgy in the past. Programming errors are much easier to fix than hardware issues, and 10 successful launches in a row suggest they have that solved.
4) It's much cheaper than an Ariane launch, and the satellite doesn't need the capacity that Ariane offers.
Biting the hand that feeds IT © 1998–2019