This number is the maximum power draw. At idle or low utilisation it will be much, much lower.
Re: I remember when AMD used to be competitive
There are reasons to expect that AMD will benefit significantly from Vulkan and DX12, while nVidia will stagnate. There is only one DX12 benchmark at the moment (at ArsTechnica), where the much older R9 290X is a match for the 980 Ti. The reason is that AMD has parallel pipelines, which scale much better with the jump to Vulkan and DX12 than the serial pipelines implemented by nVidia. All of that hardware in AMD cards was underutilised under DX11. If this proves correct, then this card will eat the Titan X for breakfast in DX12 games.
Of course it's a bet, and of course we are years away from DX12 being sufficiently popular. Which leaves plenty of time for both AMD and nVidia to design/build/sell a new generation. Nevertheless, perhaps the tables are turning for AMD and all the investment they made in hardware design could start paying off. Eventually. If they don't go bankrupt first.
Re: Cough.. cough... cough...
@Voland's right hand assuming you are right, it is very surprising they did it like this. Hope to read more on this subject.
Re: Very small market at that price
Nope, there are also people who want Fury X but without water cooling (and with slightly lower power budget). E.g. cases where water cooling for Fury X won't fit, CrossFire etc.
Re: Cough.. cough... cough...
I think only the vents at the back of the card (i.e. the PCI bracket) are the exhaust; look at the direction of the radiator fins. So, yeah, the vast majority of this heat will be pushed outside.
I wonder how
... benchmarks of this card would look under DirectX 12, against the best of nVidia.
lots of time has passed
.... for those who wonder what I've chosen in the end: I've been successfully running the following stack for nearly a year:
- Arch Linux running as a headless hypervisor, where I configure, build & sign my own packages for the software stack mentioned below, when and as I feel like upgrading them
- a kernel build closely following the current version from www.kernel.org, only a little behind for the sake of ZFS on Linux (currently 4.0.9, waiting for the ZOL 0.6.5 release before upgrading to 4.1)
- ZFS on Linux, current release plus occasionally a patch or two (currently 0.6.4.2 with a single patch from pull request 3344)
- kvm with vfio GPU passthrough, AMD GPUs passed through to Windows 7 (two GPUs, two Windows 7 VMs, plus some more VMs without a GPU; all have qemu-agent installed). Linux console on serial port only (and of course ssh access). The Linux radeon drivers are blacklisted
- qemu, currently version 2.3.0; will upgrade soon to 2.4.0 (or perhaps 2.3.1, if I do not like it)
- libvirt with libvirt-guests to start and shut down the VMs at the right moments. Patched libvirt-guests a little to use --mode=agent when shutting guests down
- VM disks are set up as ZVOLs on ZFS; all VMs are snapshotted every now and then (along with user files, below)
- a filesystem on the same ZFS pool is shared via Samba as a fileserver for user files
- also using ZFS for the Linux root, home and build directories (see top point)
- Samba 4.2.3 running on a separate pocket-size ("next unit of computing", as Intel calls this format) PC as an AD controller, to which both the Samba running on the host and the Windows 7 guests are joined as members. A second AD controller (also Samba 4.2.3) is running in a VM, just in case
- zfs send | zfs receive, run occasionally to a separate ZFS pool as backup (kept offline when not doing backups)
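The last point, sketched as a command. The pool names (tank, backup) and snapshot names are hypothetical stand-ins; the command is composed and shown rather than executed, since the sketch is meant to be illustrative without a ZFS pool present:

```shell
# Hypothetical pool and snapshot names.
# zfs send: -R sends the whole dataset tree, -i makes it incremental
#           against the previous snapshot.
# zfs receive: -F rolls back the target, -d discards the source pool name,
#              -u leaves the received datasets unmounted.
SRC=tank
DST=backup
PREV=2015-08-01
NOW=2015-09-01
backup_cmd="zfs send -R -i ${SRC}@${PREV} ${SRC}@${NOW} | zfs receive -Fdu ${DST}"
echo "$backup_cmd"
```

Run on a schedule (or by hand) after taking the new snapshot, then export the backup pool and power its disks down again.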
Re: Mitigating the problem for end users
One way to do it, assuming you have the right mix of hardware (a CPU with VT-d, enough cores and RAM, the right kind of GPU), is to make your main machine a virtual machine with GPU passthrough, running on top of a Linux hypervisor with the kvm/vfio/qemu/libvirt stack.
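On the libvirt side, passing the GPU through boils down to a hostdev entry in the guest's domain XML. The PCI address 01:00.0 below is a hypothetical example (find yours with lspci):

```xml
<!-- hypothetical PCI address; managed='yes' lets libvirt bind the device
     to vfio-pci on VM start and give it back to the host on shutdown -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Cards with a separate HDMI audio function usually need a second hostdev entry for the .1 function of the same slot.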
The thing is that the Kindle edition of a newspaper is quite a limited format; it would be difficult to put any ads into it. Well, at least in the editions I can read on a Kindle Paperwhite. And if they do that after all, I can cancel my subscription, shed a tear or two, and go back to reading on a large screen.
I'd pay as well, and there is even one thing that ElReg could offer me in exchange for my money: a daily copy of all articles in the form of a Kindle news subscription, just like I receive other newspapers. Just something to read on my commute to work.
I know it sounds like favouritism towards Amazon, however I have nothing against such a daily news delivery mechanism being made available on other platforms/vendors where such paid-for news subscriptions exist. It's just that I already happen to use a Kindle for my daily news review. I also know I could use Instapaper to scrape ElReg articles and copy them to my Kindle, but I'd rather let ElReg earn some money by preparing this for me, and making it appear just as regular news from one source (called "The Register", rather than Instapaper).
Even better if sister site The Platform implemented such a mechanism as well; they have some very interesting articles which I'd much prefer to read on an ebook reader than on a large screen (and I do not like wasting paper on printouts). Preferably at a different time of day than the ElReg one, giving me something to read on the other leg of my commute ;)
Re: salted duplicate check
rather than comparing against other users' passwords, the comparison should be against an existing password dictionary, i.e. something that both researchers and blackhats would use to brute-force hashes which may potentially leak from the database. I say "potentially" because it's the same as with home insurance: you do not want this to happen, you do not really expect it, but when it does happen you are prepared. Although I have to admit that AM used bcrypt, which gave the passwords good protection and greatly reduced the rate of brute-force attacks on the hashes.
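A minimal sketch of the idea, with a made-up dictionary standing in for a real cracking wordlist (e.g. the rockyou list), checked at account-creation or password-change time:

```python
# Reject a candidate password if it appears in a known cracking dictionary.
# The dictionary contents here are hypothetical stand-ins for a real wordlist.
CRACK_DICT = {"123456", "password", "qwerty", "letmein", "696969"}

def acceptable(candidate: str) -> bool:
    # normalise the same way an attacker's wordlist rules would
    return candidate.lower() not in CRACK_DICT

print(acceptable("letmein"))                       # dictionary word: rejected
print(acceptable("correct horse battery staple"))  # not in the dictionary
```

The point is that this check never needs access to other users' passwords, so nothing extra has to be stored or compared in recoverable form.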
checksumming is cheap: only a few fast instructions are needed, low memory overhead and tiny (when there are no errors) I/O overhead. Absolutely no reason to do anything "custom" for it to work, because your bottleneck was and will remain in I/O. It is deduplication (the online kind) which is hard.
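The asymmetry can be sketched in a few lines (block size and sample data are arbitrary): a checksum is a streaming computation with constant memory, while online dedup must keep a table of every block hash ever written resident, and consult it on each new write:

```python
import zlib

BLOCK = 4096
data = bytes(range(256)) * 512          # 128 KiB of sample data (32 blocks)

# Checksumming: one cheap pass per block, constant memory.
checksums = [zlib.crc32(data[i:i + BLOCK])
             for i in range(0, len(data), BLOCK)]

# Online dedup: a table of *all* block hashes must stay resident,
# growing with the size of the pool, and be looked up on every write.
dedup_table = {}
unique = 0
for i in range(0, len(data), BLOCK):
    h = hash(data[i:i + BLOCK])         # stand-in for a strong hash like SHA-256
    if h not in dedup_table:
        dedup_table[h] = i
        unique += 1

print(len(checksums), unique)
```

Here every 4 KiB block happens to be identical, so dedup collapses 32 blocks into 1, but only because the whole table fits in memory; at pool scale that table is exactly the hard part.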
Re: ZFS on LInux
@Nigel 11 you are mistaken, ZOL runs in the Linux kernel. If you look carefully there are a number of kernel modules, among them spl (the GPL-licensed Solaris Portability Layer) and zfs (the actual filesystem). It does use user mode (notably the user tools zpool and zfs, and calls from the kernel to /sbin/mount to handle automatic mounting of snapshots) but the actual filesystem runs in kernel mode only.
Which is why it is actually possible to both 1) boot from ZFS (difficult, and of little benefit) and 2) keep your root filesystem, i.e. /, on ZFS (not difficult, lots of benefits).
Re: The modern day YACC?
I think that, actually, there is one good reason why we "need" this. It is checksumming.
In the "olden" days, when the most common filesystem features were designed (tree structure, file attributes, security etc.), disk space was scarce but disk reliability, relative to the amount of stored data, was quite fine (and if it wasn't, you were supposed to restore data from backup). I think it is slightly different now: disks are cheap (compared to the price of other components), the amount of stored data has grown by many orders of magnitude, and disk reliability has fallen behind. Worst of all, because of the data recovery logic implemented in disk firmware, disks will now take an "educated guess" at the state of the data and then silently move it to spare sectors/blocks, rather than tell you to restore from backup. Hence, bitrot is taking place at a much higher rate (relative to the size of whole datasets) than anticipated by the architects of early filesystems. And it is occurring silently.
I am surprised that so few filesystems have checksums even now; if a new filesystem is needed to demonstrate how important this feature is, so be it. It might not be popular or even usable, but it might just prompt the authors of other, more established filesystems to eventually implement this feature.
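As an illustration of what the feature buys you (a toy sketch, not how any real filesystem lays its checksums out): keep a checksum per block, and a silently flipped bit is caught on read instead of being returned as good data:

```python
import zlib

def write_block(store, idx, data):
    # keep the checksum next to the data; real filesystems keep it in the
    # block pointer (ZFS) or a separate checksum tree (btrfs)
    store[idx] = (zlib.crc32(data), bytearray(data))

def read_block(store, idx):
    crc, data = store[idx]
    if zlib.crc32(bytes(data)) != crc:
        raise IOError("checksum mismatch: bitrot detected in block %d" % idx)
    return bytes(data)

store = {}
write_block(store, 0, b"important data")
store[0][1][3] ^= 0x01          # simulate a silent single-bit flip "on disk"
try:
    read_block(store, 0)
    caught = False
except IOError:
    caught = True
print(caught)
```

Without the checksum, the flipped byte would simply be handed back to the application; with it, the read fails loudly, and a filesystem with redundancy (mirror/raidz) can then repair from the good copy.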
Re: @ Martijn Otto - You mean btrfs, surely
purist GPL zealots may be unhappy with the idea
FTFY, because CDDL is open source. Unfortunately, because it's not GPL it cannot be included in the kernel source tree, but it can be (and has to be) built when deploying your own kernel. There are source (as opposed to binary) packages included in major distributions just for this.
I guess those who think software RAID is worse than hardware RAID are stuck in the 90s, when the overhead from software RAID (e.g. as implemented in Windows NT) had an impact on writes in the operating system. In this case we have a dedicated storage appliance, where by definition all hardware resources serve one purpose only, i.e. running the ZFS filesystem. Using software RAID (rather than a "hardware" one) makes it easier to scale these resources, since it is basically running on a more or less generic hardware platform.
Re: Am I misreading this?
Well, the assumption is that on Windows, Administrator has lower rights than LocalSystem (the latter would be the root equivalent, while the former is not). Of course, since Administrator has enough rights to install any software, including services running under LocalSystem, this is nothing more than an interesting way to make the hack a little less obvious (because who would suspect Windows Update?)
Actually, this seems like a good use for (very expensive, but still cheaper than DRAM) 3D XPoint chips.
nice to see
sadly, I do not think Theresa or David will learn anything from this lesson ... or that this leak will stop the spying programme in Germany either.
Re: Guess I'll stick to 7 until 2020...
The funny thing about systemd is that I actually like unit files and a few other things it brought that are strictly related to initialisation management. It is a really big shame that it has put its fingers into so many pies; e.g. when dbus goes wrong (sometimes it does!) I am unable to gracefully shut down the system, because the silly bugger is unable to communicate with the init process without dbus.
On the other hand, it does provide some entertainment, watching all these bone-headed attempts to move dbus into the kernel in the least efficient way possible. I wonder what the systemd and gnome authors will try to copy from Windows next, badly (Cortana a.k.a. "universal privacy invasion", perhaps?) Sorry for the off-topic, I'm heading out anyway
Re: It's pretty bad... Really
I suppose you could set a policy to enforce the settings, i.e. store the right data in the registry. Still, thanks for sharing where to put it! I suspect I might need this one day ...
Re: Missed one mystery
Yes it is, which makes it a really good candidate for a write cache. You won't be able to run general purpose software directly on this memory though, due to the endurance limit (very high, but still). Unless you classify "firmware" as "general purpose" ;)
Re: layer limit
Good point; however, experience so far suggests that the evolution of fabrication processes is very focused on increasing yields. Which makes sense, since this is where the economics of silicon fabrication come from. There is no reason to think this should be any different for either V-NAND or 3D XPoint. Thus I would expect stacking (of both) to slowly increase, perhaps on a 2 or 3 year cycle, until some other limit is hit (e.g. the current needed to support more layers).
Re: And NO software changes
This is not NVRAM; you cannot simply use as RAM any technology which has an endurance limit, no matter if it's 1000x better than NAND. This is "merely" another tier in the write cache of your regular data storage, or at best (and I'm thinking that's a niche product; see ZeusRAM) actual data storage.
Re: Manufacturing capacity
This probably won't do anything bad to 3D Flash. This technology is more expensive than NAND; you are not going to get economies of scale with imaginary terabyte-sized drives built from it (instead of flash). What you might get instead is terabyte-sized drives made with 3D Flash and with insane write speeds/IOPS, thanks to a large (tens of GB) write buffer in XPoint.
If XPoint is going to be used as "cheaper and non-volatile" DRAM then it does have an endurance problem. This can be made manageable by memory controllers, but that's another step towards increased memory latency.
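"Manageable by memory controllers" in practice means wear levelling; a toy sketch (page count and write pattern invented) of why it works, and why it adds a remapping step on the write path:

```python
# Toy wear-levelling: the controller remaps a hot logical page across
# physical pages, so repeated writes spread out instead of wearing one cell.
class WearLevelled:
    def __init__(self, pages):
        self.wear = [0] * pages           # per-physical-page write counts
        self.mapping = list(range(pages)) # logical -> physical remap table

    def write(self, logical):
        # extra indirection on every write: pick the least-worn physical
        # page and move the logical page there before writing
        coolest = min(range(len(self.wear)), key=self.wear.__getitem__)
        self.mapping[logical] = coolest
        self.wear[coolest] += 1

dev = WearLevelled(8)
for _ in range(800):                      # hammer a single logical page
    dev.write(0)
print(max(dev.wear) - min(dev.wear))      # wear stays even across pages
```

The remap lookup is exactly the latency cost mentioned above: it sits on the path of every memory write, which DRAM never needed.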
What we have now appears to be simply another tier for a large, fast and safe buffer of NAND writes (the article does mention NVMe on PCIe), which is great, but it does not exactly warrant abandoning research into DRAM replacement technologies.
For example, if such a "DRAM replacement" technology (e.g. the memristor) had latencies two orders of magnitude lower than current DRAM (and apparently XPoint), it would enable an immense jump in CPU (and GPU) performance by dropping the requirement for caches (see The Platform).
watch this space
The guy seems guilty but the evidence seems to be lacking. I am sure we will hear more about this.
I would guess it is not that small. The size of a single phthalocyanine molecule is apparently around 1nm, so I would guess that what we have in the picture is not actually smaller than 10nm.
Re: A TV tax based on income?!
I guess it goes like this: you stop paying the BBC directly and instead the BBC is subsidised, to the tune of £4bn, from your taxes. Well, they might not admit it, but this is how it would end up.
... reluctant to increase the cost of the BBC for the middle classes who most use it, but who are currently subsidised by low income license fee payers
numbers or it didn't happen. Just a reminder: middle class means someone who can afford a monthly subscription to Sky or Virgin Media, and thus is not limited to free terrestrial TV.
Also, paying the same price for the same service hardly seems like "subsidising" to me; I guess some Guardian reader must have put that sentence in for you?
Re: Hardware manufactures futures in Microsofts hands...
Good point; for one, I cannot wait to get my hands on the unholy duo of a Radeon Fury X2 and Windows 10 ;)
there is still hope for AMD.
1. IBM is making inroads into a 7nm process, in partnership with Global Foundries, and we may expect the first production chips on this process in 2018 (although 2019 seems more likely)
2. Global Foundries is the manufacturer for AMD chips
3. Intel has slipped its 10nm process to 2017 and will certainly slip 7nm beyond 2018
4. AMD has invested into ARM which is starting (slowly) to make inroads into servers, where Xeon is currently king
lossless streaming is a joke
Qobuz, stream in lossless formats at CD quality
If you listen to soprano parts on "Mozart: Messe En Ut Mineur" by Philippe Herreweghe, on Qobuz, you will hear nasty (and very obvious) clipping. I don't know about Tidal, they probably do not have this record anyway. Original CD does not have any clipping (I have it).
I would guess that lossless streaming services have entered the loudness war (or perhaps they were always there, I just failed to notice before). For me that was reason enough to cancel my subscription. For music I really like I still buy CDs, thanks.
I do subscribe to Spotify for casual listening, and I do not have expectations of great quality, or even great choice.
Re: "there cannot be "no go" areas"...
As I would consider any conversation in the privacy of my home.
Yes I understand spooks may get a warrant to bug me - let them, but they still need to bother with 1. judicial supervision 2. actual effort placing targetted bug. There must be no shortcuts for these two, that's what this is all about.
it's head of Donald Duck, obviously
Re: People still use Vista?
"as long as they aren't hurting anyone else..."
That's the trouble with all these unpatched machines connected to the Internet. They do not intentionally hurt anyone else, but these hordes of zombie PCs have very real uses to criminals. I am toying with the idea that harm by negligence should apply to the owners of such computers, but of course, since the law is only applied locally, it wouldn't help much anyway...
you've got to love it
El Reg tells me there's a Windows update coming my way, before Windows does. There is also a Flash update I neglected by a day and another Adobe Reader update, and then also QuickTime and Thunderbird on one of the computers but not on the other (must have installed it there earlier). Apparently Secunia PSI does not update software by itself and I still need to run the updates myself; perhaps I should try CSI, or maybe move to Ninite, which unfortunately does not support as many programs as Secunia does.
And so I stay up until midnight patching both my Windows machines, and am almost late for work today. But at least my machines are updated, and updating the kernel to 4.0.8 and a bunch of other software running on the Linux hypervisor on the same occasion went smoothly.
Thank you, El Reg.
Yup, exactly, that's what is interesting about this technology: removing heat faster and more efficiently from inside the chip will make it work at lower temperatures, that is, more efficiently. Faster GPUs, CPUs and memory, ahoy!
But, as I wrote above, we have no idea when this will come to fruition (if ever)
That's interesting, but of course, this being research, there is no way to tell how many years we are from commercial applications (if they ever come)
Funny thing, Amazon does exactly this for the majority of search terms. I guess "MTM" received special treatment, and that is not cool.
Re: Their ambition is rubbish...
Android is the Dalvik JVM with native bits (supported on BB10) and Google Play services (not on BB10, yet). Without those, BB10 phones can and do run a large selection of Android apps at native speed (which is not very fast, because of the JVM overhead)
The thing is that the BB10 security model is much better than that of Android, and also that many BB applications (calendar, messages etc.) are better than on Android. They do not support kitchen-sink integration, but they integrate well enough with the external world, and they are also very well integrated with all the other apps and settings of the phone.
Basically the whole article is about plans to improve support for Android apps within BB10, which is nothing special and is consistent with BB's strategy until now. There is no suggestion that BB would cease to support BB10 or pull its apps, and that would be a nonsensical thing to do anyway, because it would remove at least two huge selling points of BB phones: better security and better PIM apps
Re: No need for that
Good comment. One thing though: it is not as if Snowden could verify that his knowledge is up to date before releasing this information, right? It is quite possible that the NSA came to the same conclusion some 2 years ago and has already taken steps to improve the security of their network. Possibly by ditching autofs.
Re: Rather scary
These headers have been "auto generated from AMD internal database" (source: commit comment). There is no point reviewing those.
But I do understand why one might want to do it.
Re: One MEEELION lines…
@Tomato42 admittedly it's a little bit funny, but it's true (so, not a joke)
Re: 410,000 lines in the AMD register description header?
yeah, and these are all headers. My guess is it's "magical" consts and structures recognised by the AMD GPU, which until now lived only in the proprietary AMD driver sources.
So, I had a look at that single large commit (9,838 lines removed; 453,818 added) in 4.2-rc1 and it's an interesting one, because of the following 2 new drivers:
- virtio-gpu: KMS-only pieces of the driver for virtio-gpu in qemu. This is just the first part of the driver, enough to run an unaccelerated userspace on. As qemu merges more, they'll start adding the 3D features for the virgl 3D work.
- amdgpu: a new driver from AMD to drive their newer GPUs (VI+). It contains a new, cleaner userspace API and is a clean break from radeon moving forward, which AMD are going to concentrate on. It also contains a set of register headers auto generated from an AMD internal database.
This is going to be useful.