Feeds

* Posts by Jason Ozolins

80 posts • joined 19 Nov 2009

NBN Co in 'broadband kit we tested worked' STUNNER

Jason Ozolins
Meh

100Mb/sec at 100 metres - vectored???

Ever since the LNP decided not to kill the NBN in favour of somehow spending the money they wouldn't save on flood relief, infrastructure or bribing the very rich into bearing children, they've been really hot on the idea of vectored VDSL replacing FTTP.

Turnbull's big claim was that vectored VDSL would allow punters to get up to 100Mb/sec out of their waterlogged copper pairs, so they'll be really stoked to see NBNCo getting these numbers out in front of voters. But vectoring is done at the node end to reduce the effects of crosstalk, and it's not hard to see that a single active VDSL pair is not going to suffer much crosstalk from VDSL traffic on adjacent pairs.

It would be a lot more reassuring to see a successful demonstration of multiple active pairs running closer to the maximum specified distance between a node and end user premises.

0
0

IBM was wrong to force UK workers off final salary pensions – judge

Jason Ozolins
Devil

Re: Now how many more years...

Just read that Australia has 35% of its over 65s living in poverty. Housing is extremely expensive, and population pressure means that it's not a property bubble so much as an endless shortage. Unless you own your dwelling by the time you retire, you will be hard pushed to pay the rents that are being asked. The new conservative Oz govt also just foreshadowed pension age rising to 70. They should snap up some of the urban brownfields sites as manufacturing industry leaves due to the high cost base and exchange rate. That way they'd have some appropriate land to build new workhouses.

Personally, unless I am lucky enough to have a fantastic job by age 60, I will do everything in my power not to be compelled to slog away for the last years of my life where I have a decent chance of good health and mental capability. I don't need to go on endless overseas trips like some Australian baby boomers seem obsessed with, I just want to spend a few years of decent physical and mental health actually living my own life on my own terms for a change, and I am prepared to live in a very simple way to achieve that. Swapping those ten good years for ten more crappy years of feeble ill health at the end of my life is a worthless bargain. If I later start to run out of money or health or brain, I'll already have the requisite resources for the next step... from Exit International.

0
0

Audio fans, prepare yourself for the Second Coming ... of Blu-ray

Jason Ozolins
Coat

Re: Not Engineering

That article did point out, though, that the NS10's port-less design made for a tighter transient response than other similarly positioned, better-spec'd monitors. That it so often gets described as "brutally unflattering" or similar also suggests that they were considerably more clinical than an average home speaker.

Coat icon is for after I admit that I've only got some old Behringers... I'm not exactly high up the audio food chain.

0
0

No, we're not in an IT 'stockapoclyse' – boom (and bust) is exactly what tech world needs

Jason Ozolins
Devil

Efficient Markets Hypothesis? It's not a hypothesis unless...

...it's falsifiable. As in, it is possible to describe a state of events that, should they be seen to actually occur, would falsify the hypothesis. It's a bit of rigor that adherents to the Dismal Science ought to embrace.

Tested (and as-yet unfalsified) hypotheses eventually get accepted as theories. But the way that free marketeers toss the EMH around, you'd think it ought to be called the Efficient Markets Axiom.

2
0

Anatomy of OpenSSL's Heartbleed: Just four bytes trigger horror bug

Jason Ozolins
Facepalm

Workaround: Clients could refuse to connect to vulnerable websites

Surely if the client SSL library was altered to try this exploit once during certificate exchange, the client could drop the connection if anything extra was returned in the heartbeat response. It's a heuristic thing - the larger the "exploit" request size, the easier it would be for the client to tell that the server was unpatched - but it is at least *something* that could be done at the client end to catch connections to insecure servers.
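A rough sketch of that client-side heuristic in Python (heartbeat layout per RFC 6520; the TLS handshake itself is omitted, and none of this comes from a real SSL library - the names and defaults are invented for illustration):

    import struct

    def build_probe(payload=b"el reg", declared_extra=16):
        # HeartbeatMessage: type(1) | payload_length(2) | payload | padding(>=16).
        # Lie about the payload length: declare more bytes than we actually send.
        hb = struct.pack("!BH", 1, len(payload) + declared_extra) + payload + b"\x00" * 16
        # TLS record header: content type 0x18 (heartbeat), version (TLS 1.1), length.
        return struct.pack("!BHH", 0x18, 0x0302, len(hb)) + hb

    def looks_vulnerable(response_payload, sent_payload=b"el reg"):
        # A patched server ignores the bogus length; anything beyond our own
        # bytes means the server read past the payload it was sent.
        return len(response_payload) > len(sent_payload)

    if __name__ == "__main__":
        print(build_probe().hex())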

2
0

Major tech execs fling cash at heretical AI company Vicarious

Jason Ozolins

"Recursive Cortical Networks"

...is the model that Vicarious have publicly claimed to be working on. I wonder whether it's got any relationship to Confabulation Theory?

http://www.amazon.com/Confabulation-Theory-The-Mechanism-Thought/dp/3540496033

Confabulation theory was the first account I came across which seemed plausible from an evolutionary viewpoint - it posits that the same basic architectures that evolved for coordinating muscle movement were then specialized in evolving higher cognitive functions - and in light of the observation that comparatively few layers [think "gate delays" as an EE analogy] of processing can take place between our sensory input and our reactions if we are to react in a usefully short time.

In any case, with 5 out of 8 of their team members

http://vicarious.com/team.html

sporting PhDs, and the other three also sounding like overachieving types, it looks like it would be a very interesting place to work.

1
0

Intel promises 10Gb Ethernet with Thunderbolt 2.0

Jason Ozolins

Re: Not very impressive...

Just as politics is the art of the possible, business is the art of the profitable. Thunderbolt gear is already more expensive than USB 3.0, even with "shonky" copper. Re-using Displayport electronics to drive short length copper cables reduced the cost to vaguely acceptable levels, and also allowed backward compatible re-use of the mini-DisplayPort connector on laptops with very little space for connectors.

There are cases when copper just *wins* from a cost point of view - my workplace has 3500 x86 servers with short passive copper QSFP+ 56Gb/s cables going up to their top-of-rack Infiniband switches. From those switches to the core switches the cables are optical, with active QSFP+ end connectors, and those cables are very expensive. Similarly, racks of servers on 10Gb Ethernet often use passive copper SFP+ cables to get to top of rack switches, with multimode fibre SFP+s back to core switches, if there is no need for full bandwidth from the whole rack back to the core switch.

Anyway, DisplayPort was already quite a cool interface - I've upgraded to a new work Mac and am using my old iMac 27" as secondary monitor; it's a great way to get more life from a nice screen.

0
0

Neil Young touts MP3 player that's no Piece of Crap

Jason Ozolins

I cut my hair with a set of Wahl clippers and combs, and have not paid for a haircut since about 1997. It doesn't take long before you can do the back of your head without having to hold a hand mirror...

0
0
Jason Ozolins

Re: PoC

A few years ago, I remember being quite surprised that the highest bit rate MP3 encoder supplied in iTunes made such a hash of a big booming reverb effect that I could clearly hear the difference from the original, despite my hearing already starting to go to crap. I was pretty familiar with the original track. [Movement in Still Life, UK version - BT is pretty obsessive about his recording technique, FWIW]

So, yeah, MP3 - depends on the developer's commitment to the format. And the program content - distortion-laden guitars (Neil Young, perchance?) are actually really challenging to compress well with perceptual coding, because there's energy *all over the spectrum*, not in neat peaks like for many acoustic instruments. Not sure if your "source is of good quality" proviso was meant to apply in that case... it certainly makes the snobby "give classical stations higher bit rates because golden ears" decisions for BBC DAB radio seem even sillier.

http://en.wikipedia.org/wiki/BBC_National_DAB

1
0

ARM lays down law to end Wild West of chip design: New standard for server SoCs touted

Jason Ozolins

Re: Color me unconvinced

The number of ARM processors shipped vastly outstrips the total number of x86 processors shipped in the same time. I guess that wasn't one of the many broken promises. It would help if you gave some detail on who promised what.

A RISC versus CISC debate, absent any engineering or business considerations, is about as deep and thrilling a dispute as hatchbacks versus sedans, without reference to any real cars. Most of the interesting differences are between particular models (ISAs), not the abstract classes of car (architectural style).

It happens that the Pentium Pro and its many evolutionary descendants decode the more complicated x86 opcodes into RISC-ish uops internally. Seems to work okay for Intel.

2
0

The year when Google made TAPE cool again...

Jason Ozolins

I hadn't thought about DAT for a *long* time, but a quick check of the price for a 160GB native capacity "DAT-320" tape (even the product branding assumes all your data is 2:1 compressible) is somewhere around AUD$35, whereas a 1.5TB native capacity LTO-5 tape is around AUD$60. Looks like I wasn't missing much.

With those sorts of numbers, and an LTO-5 drive costing roughly AUD$2K to a DAT-320 drive at roughly AUD$1K, it's hard to see how DAT could compete against high-capacity tape for a bit more initial outlay, or a removable disk system for even less initial outlay... [but yes, there are some durability issues in that case.]
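Running the quoted media prices through a quick back-of-envelope (AUD, native capacities as above):

    # Cost per native terabyte, using the AUD prices quoted above.
    dat320_price, dat320_tb = 35.0, 0.160   # ~AUD$35 per 160GB native DAT-320 tape
    lto5_price, lto5_tb = 60.0, 1.5         # ~AUD$60 per 1.5TB native LTO-5 tape

    print(f"DAT-320: ${dat320_price / dat320_tb:.0f}/TB")  # ~$219/TB
    print(f"LTO-5:   ${lto5_price / lto5_tb:.0f}/TB")      # ~$40/TB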

2
0
Jason Ozolins

Re: tape is cheap, portable, and fast transfer rate

It depends what your disaster recovery scenarios are. If you are storing data offsite because you need business continuity through a disaster that takes out your online data and primary backup, then you have to have a firm plan for how to get the data back intact from offsite tape within your DR window, onto enough surviving hardware to continue operations. If you are mostly storing a copy offsite to guarantee the survival of the archive itself - say, if the primary copy is physically far enough from your online data that it could be lost in a disaster without losing the actual online copy and servers - then you just need to plan how you'll re-replicate the archive at acceptable risk.

My workplace (nci.org.au) is progressively deploying petabytes of online research data storage. At that scale, tape backup has big power/cost/durability advantages, and so that's our chosen medium. But as our primary tape system has to live in our main data centre with the online storage, almost any DR scenario that requires our offsite data copy will necessarily involve significant lead time to buy in more disk and other hardware to replace the failed/destroyed equipment; there is not the budget for continuous availability of this kind of data at this scale through disaster scenarios. Even so, we are working on strategies for minimizing the time to restore from tape, with particular regard to the very wide range of file sizes within our online datasets. Restoring tiny files from tape at media rates requires a lot of metadata IOPS, and we are taking into account the access patterns typical for each dataset before deciding how it should be packaged for long-term storage and backup.
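A toy illustration of the tiny-file problem (the drive rate and per-file overhead are assumptions for the sake of arithmetic, not our actual figures):

    # Why tiny files throttle tape restores: each file pays a fixed metadata
    # cost, so the effective rate collapses as files shrink. Numbers assumed.
    media_rate = 140e6          # bytes/sec, LTO-5-class streaming rate
    per_file_overhead = 0.01    # seconds of metadata work per restored file

    def effective_rate(avg_file_size):
        t = avg_file_size / media_rate + per_file_overhead
        return avg_file_size / t

    for size in (4e3, 1e6, 1e9):
        print(f"{size:>12.0f} B files -> {effective_rate(size) / 1e6:8.2f} MB/s")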

3
0

Junior telcos tie knot in NBN Co copper plan

Jason Ozolins

The market structure is exactly the same except that with FTTN, it is at the moment unclear under what arrangements the last mile medium will be tested, remediated and maintained, and how much of the market will actually be reliably served by FTTN. So it isn't really the same at all.

I'm not bagging FTTB: it sounds like a credible "least worst" option for MDUs and there is always scope for building copper to be renewed; but betting the farm on how well Telstra has maintained its last mile copper over the last ten years is "a brave move, Minister."

1
0

Bigger on the inside: WD’s Tardis-like Black² Dual Drive laptop disk

Jason Ozolins
Linux

Re: Linux support... well, who can say?

It couldn't be a SAS expander - you'd need a SAS controller in your laptop to make use of that. It could well be just a SATA port multiplier chip, as Marvell does make those:

http://www.marvell.com/storage/system-solutions/sata-port-multiplier.jsp

If so, it looks like it comes up in a transparent passthrough mode before the extra driver magic is added to the host. There doesn't appear to be a publicly available detailed datasheet for their port multipliers, but it would be interesting to see whether a decent SATA controller under a recent Linux kernel would detect the chip.

2
0

Tales from an expert witness: Lasers, guns and singing Santas

Jason Ozolins
Stop

Re: Couple of points

Guess what... it's a bit of both. I have Asperger's, so coordination and social skills weren't my strength. But I could sprint and do long jump, and I rode my bike 9km every school day for most of high school, so I was reasonably fit.

But it was hardly "character building" to have some smug arse of a PE teacher making suggestions about my sexuality when I couldn't cope with the mixed ballroom dancing unit we had to do each winter. Physical contact made me really stressed, particularly when some of the girls we had to partner were actively participating in the bullying that was making it hard for me to do well in the subjects I actually cared about. It was the wife of the head P.E. teacher, also a P.E. teacher herself, who actually took a moment to find out what was going on with me, and arranged for me to go to the library and do something useful, instead of standing outside the hall each lesson to satisfy the vindictive streak of my teacher. Thanks, Mrs Moore... and yes, I somehow managed to get married to a woman and have a family, despite my flamingly effeminate stance against ballroom dancing.

So yeah, I learned some things about character in P.E., but it was mainly about what sort of people I could trust in any way: not the ones who enjoyed causing suffering. Co-ordination is certainly important to your development as a well-rounded person... so just spend some time playing handball, tennis, frisbee and juggling to get over your "motor moronhood"; it's a lot more rewarding than being punched in the nuts in a rugby scrum by people who hate you.

5
0

Eat our dust, spinning rust: In 5 years, it'll be all flash all the time

Jason Ozolins

Re: The disks may go, but the blocks will remain

Blocks are indeed a convenient abstraction, but inside some SSDs they're already getting de-duped and compressed, so there are still possibilities for shifting the division of responsibility between the filesystem and the hardware. TRIM, wear levelling, read disturbance tracking, a raft of alignment hacks to deal with FAT32/MBR legacy brain damage - up until it bit MS when 4K sector HDDs arrived, and they finally abandoned the stupid lie that every disk has 255 heads, 63 sectors and partitions simply must start on a cylinder boundary - all of these are symptoms of the mismatch between a block storage model that tries to cope with any write pattern to arbitrary 512-byte blocks, and the physical realities of easy-to-kill larger programming pages arranged within erase blocks.

Some of this stuff can be handled just by exposing some basic geometry - what alignments and write sizes make sense for the underlying flash, for instance - but a copy-on-write filesystem in the vein of ZFS or Btrfs, aimed more specifically at flash and in control of programming/erase policy, could bypass the standard block model altogether. For instance, the filesystem defragmentation preening that frees up contiguous space on HDDs could turn into a way of freeing erase blocks, and wear levelling falls out as a consequence of the copy-on-write nature of the filesystem.

A machine I worked on, http://en.wikipedia.org/wiki/Vayu_(computer_cluster), had 1500 blade servers each with 24GB of SLC flash SSD, as developed by Sun. The SSD write bandwidths would drop considerably over time, even with aligned 4KB write workloads for scratch storage and swap; there was no TRIM or secure erase support on these SSDs, but we worked out that every month or so we could do a whole-of-SSD blat with large, aligned writes to return each SSD to near its original write speed. Granted, this speaks to the maturity of the SSD firmware that was delivered in 2009 with this machine, but it seems to me that better documentation of the SSD and a better understanding of how the filesystems were hitting that block device could have helped us avoid that performance degradation in the first place.
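For the curious, a destructive Linux-only sketch of that kind of whole-device blat (the device path is a placeholder, and this overwrites everything on it - strictly illustrative):

    import mmap, os

    # Stream large, aligned writes across the whole SSD, as described above.
    # Linux-only (O_DIRECT); /dev/sdX is a placeholder. THIS ERASES THE DEVICE.
    DEV, CHUNK = "/dev/sdX", 1 << 20

    buf = mmap.mmap(-1, CHUNK)          # anonymous map: page-aligned, as O_DIRECT needs
    buf.write(b"\x00" * CHUNK)
    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    try:
        while os.write(fd, buf) == CHUNK:
            pass                        # keep going until the device runs out
    except OSError:
        pass                            # ENOSPC/EIO at the end of the device
    finally:
        os.close(fd)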

So, yeah, the block abstraction is a useful one, but it's not without its warts.

2
0
Jason Ozolins
Meh

Speed (bandwidth)? Or acceleration (latency)?

Guess Amazon Glacier has no reason for existing then. After all, who would ever want to wait more than a few seconds to get their data back, even if it then arrives at a decent *speed*.

If your Internet commerce business model really does involve never knowing what (large) pieces of data your clients will instantly need from anywhere in your single-tier all-flash storage setup, I hope that they're paying well for the service...

0
0

Turnbull touts construction resumption in YouTube vid

Jason Ozolins
Meh

So the agreement with Telstra is coming when, exactly?

Whatever work is going on towards FTTN is presumably within the limits of the existing agreements between NBNCo and Telstra, which is to say using their ducts to run fibre (and now power, yes?) to cabinet sites, once those cabinet sites are chosen. Which is fine, until it comes time to trace, test and possibly remediate that last leg copper that David Thodey reckons is good for another hundred years... who knows, maybe he was thinking it could last another hundred years just satisfying the Universal Service Obligation. Dial-up modem, anyone?

The Coalition having waxed so very insistently on how FTTN was the only sensible approach for Australia (that is, after four years of variously insisting that it was unnecessary, that 4G wireless would make the NBN obsolete, and that the money could be better spent on flood relief or proper manly infrastructure like roads), what are the chances that Telstra will play hardball and shift a large part of the unknown cost of copper tracing, testing and remediation back onto NBNCo, while retaining actual ownership of the last leg copper? Pretty good, I'd say.

FTTN might as well stand for "Feed Telstra The NBN".

0
0

FACE IT: attempts to get Oz kids into IT jobs are FAILING

Jason Ozolins
Devil

Fancy stuff is pointless without the basics

The last factoid in the article is the real elephant in the room: basic skills are just not getting the priority they deserve in early education. I'm pretty worried about what my kids bring home from school. Last year, my 6yo was bringing home corrections to her spelling which were not actually correct themselves. "Trisicol" for tricycle, "verander" for verandah - from a teacher who claimed righteously to my wife that teaching was not just a job to her, but a vocation. This year, in year 2, my daughter is doing homework at an age where I and all my classmates had none, and yet I am just not convinced that the general standard of attainment in her class is any better than I saw at that stage of primary school...

Frankly, if you go into IT without a decent command of language, and of discrete maths, you will struggle to collaborate with people effectively, and to bring any rigor to problem solving processes. I'm not talking fancy University maths, just a decent secondary schooling foundation to get you used to a bit of abstract and systematic thought that you can build on as required.

Going on about some magic IT training that will turn kids into the IT workforce of the future is missing the point if the fundamentals are not properly addressed. It also helps to be interested in the subject, because as cracked points out, you will have to keep learning new stuff to stay useful...

2
0

HELIUM-FILLED disks lift off: You can't keep these 6TB BEASTS down

Jason Ozolins

Re: less helium than a balloon

Funny you should say "laws of economics" rather than "economic forces". Because, with the way that the various world economies have been going, it doesn't seem like we actually know the laws well enough yet - unless they are the sort of unfalsifiable laws where whatever happens, that's just what the laws said would happen. Funnier yet, there are psychological studies where students of economics prove to be less altruistic/fairness-minded, and more self-interested, than "ordinary" students in financial dealings... but supposedly the same laws of economics apply to both economists and lesser mortals.

Sure, profit motive will draw private companies to fill the gap after some amount of price flapping and pain among industrial and scientific users of helium. But was it really necessary for the US Government to get out of the helium marketplace in some ideological panic, lurching around smashing stuff on the way out? Oh well, after the US Government shutdown last month, that would seem to be totally par for the course.

0
0
Jason Ozolins

Maximum operating altitude?

Seagate states that most of their drives are designed for a maximum operating altitude of 10,000 feet. If the seals on these helium-filled drives hold at altitudes higher than 10,000 feet, these drives could operate in places where most Seagate drives are not warranted to work. Good for folks in Bolivia, for instance...

2
0

Malcolm Turnbull throws a bone to FTTP boosters

Jason Ozolins

Re: The devil is in the detail

Yes - nowhere have I seen any mention that the Coalition sought, or had, access to Telstra cable records that would let them make more than a wild guess as to how much copper would (in principle, at least) support VDSL.

Add to this the elephant in the room that is Telstra's level of commitment to proper maintenance of their copper network in the last decade. My mother's last house, built in the late '80s, had decent ADSL2 a couple of years ago, until a fault brought out a Telstra tech who, between complaints about how crappy his job had become, mentioned to my brother, "yeah, you probably won't have such good ADSL anymore". Whatever work he did on that line to get it working again, he wasn't lying.

The opinion of Telstra executives at a Senate hearing in 2003 was that their copper network was "five minutes to midnight", and they would only guarantee its function to 2018. An optimist might say that those executives lacked the vision to see that networking technology would eventually find ways to wring decent speeds out of a few hundred meters of copper; but the real question is whether that view of the copper network, and their focus on higher margin mobile services to drive revenue growth, led to such cost pressures that they effectively gave up on maintaining the copper to the standard where VDSL2 speeds were uniformly achievable.

For what it's worth, I see FTTB for multiple dwelling units as one place where it makes sense to use VDSL2. The copper runs are shorter, and hopefully the deployment could be done in such a way as to leave open the possibility of fibre retrofits back to the basement, for tenants who manage to get the body corporate to agree and are prepared to pay for the retrofit.

0
0

It's all in the fabric for the data centre network

Jason Ozolins
Meh

Infiniband, anyone?

Funny how Infiniband has been offering a switched fabric network with separated control and data planes for about a decade, at a price per port that for a long time was way lower than comparable Ethernet (once the Ethernet specs were even drawn up for the link speeds that IB was supporting). Plenty of supercomputers have had single IB connections from compute nodes to a converged data/storage fabric.

Not sure how much 40Gb Ethernet switches are going for, but considering that a basic unmanaged 8 port QDR (== 40Gb/sec signalling, 32Gb/sec data) switch can be had in the USA for less than $250/port, and a 36 port top-of-rack QDR switch with redundant power for about $140/port, I'd be surprised if there were such low entry points for Ethernet switches with comparable bandwidths and software defined networking capability. Even that tiny 8-port QDR switch can be connected into a mesh fabric, and toroidal IB networks with peer-to-peer links to adjacent and nearby racks can allow some degree of horizontal per-rack scaling for deployments growing from small beginnings that cannot justify more expensive core switching.

Granted, this last point is making a virtue of necessity, in that you pretty much *need* to run an Infiniband subnet manager on an external host once you get to a decent size network. The subnet manager that was supplied on an embedded management host with our modular Voltaire DDR IB switch was not much use, as it tended to lock up... it's easier to restart the subnet manager, or switch to a failover backup, if it's running on hosts you fully control. =:^/

0
0

Windows 8.1: Read this BEFORE updating - especially you, IT admins

Jason Ozolins

Re: Usual MS upgrade stuff then...

"Thirdly, wireless broadband is the future and on the basis of downloads of up to 40mps in parts of Australia is very much the present in part. Mind you 25mps is what the previous Government's broadband was slated to be in its first 3 years of full operation."

Whatever revelatory substance it is that they put in Alan Jones' coffee at the 2GB studios, apparently it is being served at your local cafe too.

If you take a second to look at your preferred NBN implementer's choice of technology, as released in the Coalition's NBN policy in April 2013, you will find that there is absolutely no mention of magic wireless that will replace wired deployments in metro areas. Ze-ro mention of magical unicorn+rainbow radio technology to serve city users, just 4G/LTE for rural areas, with lower contention ratios than are designed for city 4G deployments. That is because 4G/LTE, like all the wireless broadband technologies that came before, is subject to the laws of physics, which kind of tie you down to using a crapton of radio spectrum if you want to serve a lot of concurrent users in a given area.

This is why mobile telcos are actually clamping down hard on download limits. Telstra's rate card: http://www.telstra.com.au/broadband/mobile-broadband/plans/ - shows a breathtaking $95/mo for 15GB. But everyone knows *they're* a rip-off, [that was guaranteed by the monopoly status they inherited when the Coalition privatised them, cough]... so surely overseas it's all roses and endless video streaming over 4G? Okay, here's what Singtel has to offer for its 4G mobile PC-oriented broadband plans:

http://info.singtel.com/business/products-and-services/internet/broadband-laptops-and-tablets

Mmmm, AUD$34 for 10GB of download, with excess data at ~ AUD$9/GB. That's the future, right there! (Assuming you meant the future to be just expensive, instead of very expensive.)

4G will work really well for the things businesspeople want to do when they're on the road. It will not be a magical replacement for wired broadband in metro areas. Nor will whatever follows it. Wired deployments have their own contention issues, but they actually exist in the other direction - there is much more total potential downstream bandwidth than you can afford to carry/switch upstream. But if the business case emerges, you can upgrade the backhaul or switching gear on your wired deployment after the fact; whereas 4G radio technology will stay pretty much set in stone - for a certain amount of spectrum, you'll get a fixed Gbit/sec of total usable bandwidth.
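To put rough numbers on that contention point (figures are round illustrations, not any carrier's real ones):

    # One LTE cell sector shares a fixed pool of usable downlink bandwidth.
    cell_capacity_mbps = 100    # assumed total for ~20MHz of spectrum
    for active_users in (1, 10, 50, 200):
        print(f"{active_users:>4} concurrent users -> "
              f"{cell_capacity_mbps / active_users:6.1f} Mb/s each")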

Meanwhile, over in the 2GB part of the collective delusion/bile tank that is Australian commercial talkback radio, Alan Jones will politely refrain from calling his Coalition pals idiots for not heeding the same sage advice about magic radios being the future of broadband that the Labor hacks were so stupid to ignore. Funny about that.

2
0

Object storage: The blob creeping from niche to mainstream

Jason Ozolins
Meh

Re: Object: It's not just about storing stuff...

Yes and no to the "still need a filesystem under the covers". You can pretty much chop the directory tree off a traditional UNIX filesystem and use inode numbers as object IDs. For instance, the Object Storage Targets inside Lustre can be mounted locally on the storage servers for maintenance when the cluster filesystem is down, and what you then see is a placeholder filename for each inode in use, hashed into a containing directory tree to keep the directory sizes manageable. When the cluster filesystem is mounted, most operations refer directly to the inodes - it's only when a new object is created that its placeholder filename has to be added too.

As for efficiency: if you expose inode numbers to the filesystem layer, bypassing the directory tree and addressing inodes directly is certainly no *slower*...
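A toy version of that placeholder scheme (the fan-out and naming are invented for illustration, not Lustre's actual on-disk layout):

    import os

    # Hash each object ID into a small directory tree so that no single
    # directory grows huge. FANOUT and the name formats are made up.
    FANOUT = 32

    def placeholder_path(object_id: int) -> str:
        d1 = object_id % FANOUT
        d2 = (object_id // FANOUT) % FANOUT
        return os.path.join(f"d{d1}", f"d{d2}", f"O{object_id}")

    print(placeholder_path(123456))   # d0/d18/O123456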

1
0

Microsoft Xbox One to be powered by ginormous system-on-chip

Jason Ozolins
Windows

Memory-mapped frame buffers, old as the hills

Video access to large address ranges of main memory has been around since long before the Amiga. The Atari 800 and the Commodore 64, for instance, both had memory-mapped frame buffers which could be set to read from most parts of the RAM.

The Atari 800 custom audio/video chips were IIRC designed by Jay Miner, who went on to design the custom chips in the Amiga. The Amiga had much more CPU memory also addressable by graphics hardware, and added a nifty DMA coprocessor that could do bit-oriented graphics operations over data stored in the 'chip' memory, as well as moving data around to feed the PCM audio channels and floppy controller... but at the core, it was the same kind of architecture, just scaled up.

Things got much more interesting when CPUs got write-back caches; now explicit measures were required to ensure that data written by the CPU was actually in memory instead of just sitting in a dirty cache line at the time the GPU or other bus mastering peripheral went to fetch it. It's all the same cache coherency issues that multiprocessor system architects have been dealing with for years, and in a system like the XBOne, most of the peripherals are more or less peers with the various system CPUs in terms of how they access cached data; in fact, most peripherals look like specialised CPUs, hence the "heterogeneous" part of the HSA. You don't need to explicitly flush CPU caches, or set up areas of memory that aren't write-back cached, in order for the GPU to successfully read data that the CPU just wrote, or vice versa. That's the nifty part.

I'm guessing that the XBOne, like the Xbox 360, will have its frame buffers and Z-buffers integrated on the enormous CPU/GPU chip. That will reduce the bandwidth requirements on main memory by a great deal, as GPU rendering and video output will be served by the on-chip RAM. There are other ways to get some of the same effects - the PowerVR mobile device GPUs render the whole scene one small region ('tile') at a time, only keeping a couple of tiles plus the same size of Z-buffer in on-chip RAM, then squirt the finished tile out to main memory in a very efficient way - but it does create other limitations in how the graphics drivers process a 3D scene; any extra CPU work to feed the GPU takes away from power savings given by the simpler, smaller GPU. Tradeoffs abound.

1
0

Yahoo! Japan drops UPS systems, crams batteries into servers

Jason Ozolins

I'd guess that they handle this in other ways - make another copy of the data on the server, and/or take it out of the load balancing pool before swapping the battery. Paying for redundant hardware on every node to reduce a rare failure mode is the kind of thing the huge scale companies are trying to avoid where possible.

0
0
Jason Ozolins

Re: Still need generators; PUE tricks

UPSs + backup generators + dual power feed/supply for everything is the standard approach if you want to have a battleship-style datacentre that can fight on through disasters - particularly if you are selling space to tenants and have minimal control over their behaviour/system architecture. You sell them a service level, and then you have to maintain it. Their resilience to disasters outside your usual security+power+environment+network obligations is not your problem.

On the other hand, if you are designing a scale-out system that will live across datacentres that you happen to control, you can make all the ducks line up in a different way:

- use multiple sites with diverse power and network feeds

- plan only to ride out short outages at any given site

- have non-redundant power into each rack, and into each server, but diversity in feeds to different racks

- integrate power/network topology + physical placement information into data placement/load balancing algorithms to maintain data redundancy and service availability in the face of failures.

Vertically integrating the hardware, software and hosting of your service means you don't have to pay for double the UPS/generator/power distribution/PSU capacity to achieve service-level redundancy. In this model, most of your servers need maybe a couple of minutes of uptime to ride out small power blips and also let them write out dirty data from RAM. If you treat RAM as nonvolatile, and handle redundant storage at a higher level in your stack, you can use free RAM as write-behind cache and also remove the need for a lot of synchronous filesystem writes, so you get better throughput for write-heavy workloads.

As for the objection that hiding the UPS inside the server just hides the UPS's contribution to effective PUE: consider that instead of building a big, easily serviceable AC->DC->AC UPS that will keep the fussiest of servers running, you get to look at the PSU schematics and build the simplest AC->DC UPS that will suffice to keep that specific PSU's outputs within spec. That's got to help a bit.

1
0

Don't let the SAN go down on me: Is the storage array on its way OUT?

Jason Ozolins
Meh

Cache data where it is most effective

Yes, but caching can be done on SAN clients as well as arrays. Any application that runs against a single-mount filesystem can cache data locally to reduce re-reads. The amount of local cache scales up easily with the number of SAN clients, and filling out DIMM slots with best bang/buck size modules is a cheap way to buy cache. And yes, if the data is on a cluster filesystem then the benefits of client caching depend a lot more on the type of app and the particular filesystem. Oracle RAC, for instance, manages cache coherence across multiple clients on shared database files at the application level and bypasses OS caching altogether; AFAIK it can pass cached data from one RAC node to another across a fast interconnect like Infiniband, rather than making the second client read from shared database storage. In fact, over Infiniband the requesting client may not even have to make a system call to receive the data.

Cache on storage arrays is much more expensive per byte than local client RAM; on midrange arrays with set amounts per controller it is not that large compared to the total cache available on a few well-sized SAN clients, and for high-end gear like Hitachi virtualizing controllers, cache upgrades cost so much that a couple of years back, the storage admins at my University ended up in a sorry bind where they knew they needed more cache, but simply couldn't raise the money to get the upgrade. This scarce and expensive resource is best used to do things that *can't* easily be done with cache on local clients, like:

- reliable (mirrored, nonvolatile) write-behind caching, for write aggregation, annulment (quickly rewritten filesystem journal blocks, etc), and load smoothing (assuming there's any idle time!)

- speculative readahead of sequential data during idle time; the array is the only thing that can really know if the disks are actually idle

- reducing data read in common across *multiple clients*; for instance, base OS disk images in a copy-on-write VMFS disk hosting setup, or copy-on-write cloned SAN volumes. Or, as on the clusters at my workplace, lots of cluster nodes all reading the same executable and source data when a large parallel job starts. But in that case, the filesystem is all on JBODs, and Lustre object servers are doing the read caching, with terabytes of aggregate cache across all the object servers, at low cost per GB of cache.

As for high-end arrays beating midrange arrays with the same quantity of disk due to lots of cache: apart from cache sizes, the number and speed of host-side and drive-side interfaces on the SAN controllers will certainly make a difference for non-random I/O benchmarks, given a large enough number of disks in the array, and then the controller architecture needs to be capable of feeding those interfaces. There are a lot of ways to get more performance from the same (sufficiently large) number of drives.

1
0

Give them a cold trouser blast and data centre bosses WILL dial up the juice

Jason Ozolins
Coat

Re: Switch heat vets

"~1000mm wide rack"

Bother. Thinking of the width, typed the depth. Actual rack unit width was a bit more than a 600mm tile. Water cooled doors removed >30kW of heat from each rack unit so there was no hot aisle in that system. Until you opened a rack door, that is.

I'll get me coat...

0
0
Jason Ozolins
Meh

Re: Couldn't all this waste heat

Regenerating electricity from low grade heat isn't useful. Using low grade heat as a head start for heating buildings can be useful. The difference between ambient temperature and the server exhaust air temperature is too low to do much else with it.
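A quick Carnot bound shows why (the temperatures are ballpark guesses for server exhaust and ambient air):

    # Upper bound on turning server exhaust heat back into electricity.
    T_hot, T_cold = 45 + 273.15, 20 + 273.15    # kelvin, assumed
    eta_max = 1 - T_cold / T_hot
    print(f"Ideal Carnot efficiency: {eta_max:.1%}")   # ~7.9%; real kit gets far less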

You *can* use waste heat from power generation for cooling, though, as the temperature drop to ambient is much higher:

https://en.wikipedia.org/wiki/Cogeneration

(the trigeneration bit of the article)

0
0
Jason Ozolins
Headmaster

Re: Switch heat vets

In HPC, where high power density per rack is the norm (peak I've encountered is 35kW per ~1000mm wide rack, and many racks) the tendency is towards hot aisle containment, with some form of water fed in-row heat exchanger cooling, either in the back doors of the racks themselves, or above the aisle, or in-line with the server racks (APC provided that last sort for the Raijin supercomputer at the NCI National Facility in Canberra: http://nf.nci.org.au/facilities/fujitsu.php). In this model the hot air is cooled as close to source as possible, there is no need for air handlers to pressurize the subfloor with cold air, and there are no long return paths for warm air back to air handlers, mixing with ambient air on the way and making it harder to extract heat from the air. Key points:

- Heat is concentrated in as small a volume of air as possible in a hot aisle, making it possible to use higher temperature water to feed the in-row coolers.

- In-row cooler fans are hopefully producing slight negative pressure in the hot aisles to minimize air leakage through gaps in the racks. This also helps prevent hot air stagnating behind front blanking plates in partially full racks; still hot air is bad because it loses heat by conduction to the rack or adjacent equipment, and that heat can make equipment stressed, or escape out to the room.

- Issues of equalizing cold air distribution are reduced when the inlet air is the mixed air from the room with no particular cold vents. The in-row coolers ideally return air to the room only slightly colder than the bulk air in the room. If they do more than that, then the return water is cooler than it needs to be.

- The sheer volumes of air needed to cool dense servers become unworkable with a single pressurized floor. With hot aisle containment, the airflow is local to each hot aisle and distributed among the cooler units.

- The water returning from the in-row heat exchangers is warm enough that for a lot of the year in Canberra "free" cooling can be used, instead of needing to use heat pumps to get the heat into a lower volume of hotter water going to the cooling towers.

The Raijin data centre itself is not classically cool; more like 25 degrees in the room. The hot aisles are well above 40 degrees C, very noisy, and not nice places to linger in. AFAICR the only room air cooling is to achieve the requisite air changes so humans are not breathing endlessly recycled plastic volatiles.

0
0

Texas school strikes devil's bargain, drops RFID student tracking

Jason Ozolins
Devil

Cameras to watch cameras

Schools have lots of rooms, and lots of corridors. Each needs a camera, but actually you need two cameras that can see each other so as to deter the darlings from destroying the cameras.

A few years back, a workmate's wife moved out of secondary school teaching in a reasonably affluent suburb of Canberra. She was sick of being threatened and physically abused by children whose parents would simply refuse to entertain the idea that their offspring were less than perfect, and who made every escalation up the rung of disciplinary measures into a pitched battle. She would have welcomed cameras in the classroom, and I'm sure that the students who enjoyed similar treatment to her at the hands of the same thugs (of both sexes) would probably prefer surveillance to the ongoing threat of assault.

[Yes, I am bitter, and I do think of the lot I had to deal with in high school and wonder whether as they got older, they continued to enjoy causing suffering whenever they could get away with it. If so, I hope that their smoking habits are starting to catch up with them in various malignant ways, so as to slow them down a tad.]

5
0

ITU readies gigabit G.fast standard for copper's last wild ride

Jason Ozolins
Facepalm

Re: Interference

Anyone who voted your post down is clearly uninterested in the benefits to society of having around a group of people who have the equipment and experience to assist in the sorts of disasters that clobber the phone network, either from damage to plant or massive call congestion.

If you've read about the sort of engineering that went into specifying Category 5 and higher cabling, then you can appreciate on the one hand the technical achievement of getting similar speeds down telephone copper, and on the other that it is a bloody awful medium for high speed comms. And yet, it's still better for large scale fixed deployments than wireless, which is what the telcos would really rather we were all paying through the nose for...

1
0

Unreal: Epic’s would-be Doom... er... Quake killer

Jason Ozolins
Thumb Up

I was impressed with how seriously Epic took the networking issues...

Remembered reading this at the time: here's a nifty archive of old Tim Sweeney posts where he mentions that to get the network code right, he eventually included an "ISP from hell" simulator in the Unreal network code, so that he could get a really good handle on the effects of latency, bandwidth and packet loss problems:

http://floodyberry.com/sweeney/tims_news_1998.html#Unreal_Networking_Code__Status

Pity that the later Unreal patches broke the A3D surround sound support, and replaced the original, characterful weapon sounds for the flak and goop guns with ones that sounded to me like generic "pew-pew" space gun sounds. One day I'd like to put together a machine with the right combo of patches, soundcard, and restored sounds to play that game through once... I can't remember why I never finished it! =:^/

0
0

Zynga banks fluke profit - won't happen again, says CEO

Jason Ozolins
Meh

So they were just another game company after all...

I can't help remembering the controversy when, before the IPO, Zynga executives clawed back stock options issued to employees they saw as undeserving.

Somehow the job of running a real business turns out to be more difficult than fleecing a bull market of its silly money. Hubris, meet Nemesis.

0
0

Turnbull 'flat out' seeking NBN killer blow

Jason Ozolins
Facepalm

Re: Of course we should go with FTTP

The idea that you could fund Gonski education reforms or NDIS out of "savings" from a scaled back or cancelled NBN has officially gone out the window, now that both Labor and the Coalition agree on one thing: whatever NBN is built by either side will be treated as an off-budget investment.

Of course, back in 2012 when the Coalition were trying to completely destroy it, the NBN was a "budget black hole". So it's not surprising, indeed really quite convenient, that most folks going to this election will remember Mr Shouty's mantra about how many roads and hospitals could have been built using the money wasted on the NBN, without realising that the Coalition's allegedly cheaper NBN will just deliver a somewhat lower interest bill for the Commonwealth - assuming, that is, that the copper takeover is free and all the copper works peachily for VDSL. Cough.

BTW, has anyone heard a reaction from Telstra about how it sees the Coalition's request for all the copper phone services? Do we know how much extra money they want in exchange for handing over that gift horse?

0
0

John Lennon's lesson for public-domain innovation

Jason Ozolins
Thumb Up

Re: GPL is copyright

Absolutely agree that GPL relies upon the legal protection of copyright to achieve its end.

Was really happy to see that many presenters at Linux.conf.au 2013 were attributing the images in their slide decks. Haven't been to a conference for ages, so maybe this isn't new, but back in the mid 2000s it was not as common.

Without copyright, the GPL couldn't work, and without the GPL and similar licenses, there'd be no implicit patent grant on code distributed; a company or individual could inject submarine patent claims into free software, wait for it to turn up in a derived product such as an embedded system, then sue for patent infringement. Beats developing your own products...

1
0

Intel pits QDR-80 InfiniBand against Mellanox FDR

Jason Ozolins
Happy

Re: This doesn't make sense

I don't think it doesn't make sense. :-) Looking at this from a sysadmin POV (I'm not an applied maths whiz, but I've worked for and with some):

- Unless your job is embarrassingly parallel, your cluster nodes will need to communicate with each other, not just with the filesystem.

- The pattern and amount of that communication depends on the type and scale of the job.

- As more cores end up inside each compute node, the interconnect has to scale up in speed for some sorts of jobs (definitely for all-to-all patterns) to get the same throughput per core as used to occur when each node had fewer cores. There is also more RAM in each node, and hence more checkpoint data to be saved in the I/O phase - but the I/O phase is likely limited more by the filesystem, unless you're using some fancy two-stage checkpoint setup (i.e. quick dump to dedicated checkpointing system that can then stage it out to the filesystem).
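Rough arithmetic on that checkpoint pressure (every figure below is assumed, purely for illustration):

    # More RAM per node means more dirty state to dump each checkpoint.
    nodes, ram_per_node_gb = 1000, 128
    fs_bandwidth_gbps = 50      # aggregate filesystem write bandwidth, GB/s
    checkpoint_gb = nodes * ram_per_node_gb * 0.5   # say half of RAM is live state
    print(f"Checkpoint time: {checkpoint_gb / fs_bandwidth_gbps / 60:.0f} minutes")  # ~21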

1
0

The universe speaks: 'It's time to get off your rock!'

Jason Ozolins
Coat

@florida1920: earth-orbiting station idea

Depends what you're worried about. A gamma-ray burst that takes out life on earth will also take out any life in orbit around the earth. The "to the starz!" folks are so committed to preserving humanity that they would spend all our resources making sure that we can spread humans so far out that even a gamma ray burst couldn't stop us from using all the resources of other places to keep spreading ever outwards. And so on.

As for the visionary qualities of science fiction, I'm surprised that nobody's mentioned that great Kurt Vonnegut short story, "The Big Space F**k". In it, humanity's last gasp at some demented form of survival is shooting a rocket full of freeze-dried sperm at a space-time wormhole that will send it to somewhere near the Andromeda nebula, in the hope of "finding something fertile out there". Billboards beside roads advertising this grand scheme proclaim "F**k you, Andromeda!". Seems about as much point to it, honestly.

[and yes, Kurt Vonnegut uses the proper swears]

0
0
Jason Ozolins
Flame

Babylon 5 and "all of this was for nothing unless we go to the stars" quote

So, apparently we have to "go to the stars" for anything to be worthwhile. Let's try to clarify this:

How many of us have to go there? You, me, our kids... what if it was just the Murdoch and Koch and Ellison and Gates and Jobs and Putin and Romney and <insert lots more 1% names> families? They're humans, too... and they have a lot more resources at their disposal than most ordinary people do. Why shouldn't they be entrusted with the stewardship of the works of Buddy Holly and Aristophanes, etc? Who gets to be the payload, and who gets to be discarded as early stages of the great rocket of humanity?

Do the ones that go have to be the same species that we are now, or some offshoot of humanity? Will they even care about all that culture? I am trying to convince my son to care about Aristophanes, but he's more interested in Minecraft. How far removed from us could any descendants be and still care about carrying the essence of "us" into space somehow? Do you feel that you are representing for the Homo Habilis crew every time you make a tool?

What about just sending some AI carrying all of recorded human knowledge/culture, which can explain it all for someone else to appreciate? What's so great about my 1,000th generation descendant shrugging at the works of Marilyn Monroe and Buddy Holly before getting back to Space Minecraft version 4982? Perhaps something that is not in any way descended from my or any other human loins might feel more appreciation for "Peggy Sue"?

I know for a 100% fact that I will die. Sometimes in the meanwhile I get to enjoy being alive just doing my own thing, but apart from that it's all about how I interact with the living, reacting world around me; people, and animals, and even the plants in my garden. Of my small impact on that world, even less would ever have a chance of being known unto these wonder-entities who will carry all the "proper achievements" like Marilyn Monroe out into space and eternity; guess my life is worthless. Oh well.

Flame away, space lovers...

1
2
Jason Ozolins
Devil

Re: Just a thought but..

Presumably an LENR reactor built with the same advanced insight that produced the classic http://en.wikipedia.org/wiki/All_About_Radiation.

Reminds me of the time a local mad guy was advertising at the refectory of the University where I worked for help in building a fusion rocket using his amazing "divide by zero" technology. I'd already found a screed of his that he'd left behind on a bench, and said feat could apparently enable all sorts of amazing technology, instead of just producing an NaN. The mathematical arguments employed to prove that assertion were only slightly more bizarre than the sort that postmodernists use when they try to muscle in on maths or physics. And they get tenure.

1
0

Penguin Computing muscles into the ARM server fray

Jason Ozolins
Headmaster

Cache coherent interconnect would only be useful if...

...the processors have enough physical address bits to allow direct addressing across all the memory attached to the interconnect. Cortex-A9 can only address 4GB in total, so to get anywhere near addressing the memory on 4096 sockets, you'd only be able to put 1MB on each socket, which seems a bit small for today's software... :-)

Also, are you sure you even *want* it? I was somewhat involved with the 1536 processor Altix 3700 system that was installed at my workplace (nf.nci.org.au); it seemed that SGI were keen for us to run it as a few honking big SMP boxes, but the exposure to component failure that you get with a few huge SMPs means it only really makes sense for jobs which necessarily take the whole system. AFAIK that's how NASA ran their Columbia Altix cluster. We ran a big mix of workloads across that number of CPUs, and so a failure that crashed a 512-1024 CPU SMP would have killed a lot of jobs that had no dependency on the failed part.

Even when the Altixes were run as a cluster of 32-64 CPU SMPs, with the same interconnect serving to run MPI between SMP boxes, the cache coherency in the interconnect was still there, and could lead to cascading failures if you didn't shut down a failed SMP box in just the right way; memory shared between SMP nodes for MPI communication was actually cache coherent with the other nodes mapping the same memory, so a failure in one node could cause other nodes to fail if cache lines for other machines' memory got "stuck" on the failed node. Not pretty, and made worse by all the Clustered XFS storage fencing that happened as nodes died; if enough fence-outs happened in a short time, it could cause the Brocade director class FC switches to hang, with further ensuing hilarity.

Moral is, be very sure that you want the very tightly coupled thing, because you'll pay for the complexity one way or another...

2
0

Panasonic gets second chance with £4.7 BEEELION bailout

Jason Ozolins
Unhappy

Pity, they have made some really nice stuff...

Panasonic cameras and stereos are among my least regretted purchases... but as I seem to buy it all second hand or on closeout special, I'm of no use to them at all. =:^/

0
0

Microsoft, RIM ink new licensing agreement

Jason Ozolins
Linux

Re: $300K, idiots

Andrew Tridgell (of Samba fame) released two patches for Linux's VFAT support, that avoided infringing the patent that MS were using to threaten TomTom and other companies selling products with embedded Linux that could write VFAT filesystems.

Tridge's second patch for getting around MS' long filename patent made sure that the VFAT code created only long filenames, with no usable short filename equivalent - both sidestepping the incredible "innovation" that Microsoft trumpeted in its publicity about the TomTom case, and showing how tiny its actual value was.

This behaviour is fine as long as you are using Win95 or above, which is to say, basically everybody anywhere. It's long past time to stop even pretending that there is a value in keeping compatibility between modern hardware and dinosaur versions of MS-DOS. Stupid disk geometries, brain-dead 8.3 filenames, tiny memory limits... if you absolutely must have continued access to Borland Sidekick or DONKEY.BAS, run that crap in a VM.

The irony about all this innovation that MS clobbers people with is that if you want a FAT filesystem laid out properly on an SD card, on no account format it on anything before Windows 7 (maybe Vista with a later service pack?) - use any decent brand camera instead. (Snow Leopard gets it wrong too, btw).

Camera manufacturers have been padding the FAT filesystem structures to make them align properly with the underlying flash erase and program cells for many years now, but only recently did the genii at MS realise that starting the first partition on sector 63, with no reserved sectors to align the clusters properly either, was a great way to make your SD cards wear faster and run like dogs. More pointless backward compatibility, hurting people in the here and now.
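A simplified version of that alignment sum (FAT32 layout reduced to reserved sectors plus FATs; all the numbers, including the erase-block size, are invented for illustration):

    # Where does the FAT32 data region land relative to flash erase blocks?
    SECTOR, ERASE_BLOCK = 512, 4 * 1024 * 1024   # 4MiB erase blocks assumed

    def data_region_offset(part_start_lba, reserved_sectors, num_fats, fat_sectors):
        return (part_start_lba + reserved_sectors + num_fats * fat_sectors) * SECTOR

    # Legacy DOS-style layout: partition at LBA 63, nothing padded - misaligned.
    legacy = data_region_offset(63, 32, 2, 8192)
    # Camera-style layout: partition start and FAT sizes padded for alignment.
    padded = data_region_offset(8192, 2048, 2, 3072)

    for name, off in (("legacy", legacy), ("padded", padded)):
        print(f"{name}: data region at {off} bytes, aligned={off % ERASE_BLOCK == 0}")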

0
0

'Stuff must be FREE, except when it's MINE! Yarr!' - top German pirate

Jason Ozolins
Thumb Up

Re: "I can be anything on the Internet"

Really good point.

Also, for anything you say that is really contentious, people will make it their mission to expose the real person behind "littlegreencrocodile27@hotmail.com", so you might as well just use your real name and stand behind what you say.

The only exception to this is if you go up against people who will try to hurt you in real life, but that requires a whole other level of commitment to maintaining a separate online identity. Whimsical mask wearing isn't much help at that point.

0
0

Climate denier bloggers sniff out new conspiracy

Jason Ozolins
WTF?

META-FAIL

This is one of the least helpful comments on El Reg.

So, would you care to say *how* it's horribly written?

Long, excessively adjectival, self-indulgently luxuriant run-on sentences?

Or sentence fragments?

Or too many short paragraphs?

Or is it just that you don't like the suggestion made by the researcher who is the subject of the article?

One thing you can say for the pure laissez-faire outlook of "The world will be better off as a whole if each individual agent acts in accordance with its own interests" is that it's a blanket statement, and essentially unfalsifiable. Things not working out? Must be over-regulation. There'll always be enough of that evil regulation in any real country or financial system to save the LFers from ever having to admit that the market failed. Everything that moves markets in crazy ways is short-term noise, everything that moves them in sane ways is the eventual, inevitable correction by the Invisible Hand of the Market, assuming everyone gets sufficiently the #$(@ out of its way. It's a powerful, sustaining faith with little room for doubt or nuance.

As for the links between that and "conspiracist ideation"... they are probably best uncovered by conspiracist ideators.

26
7
