21 posts • joined Wednesday 20th June 2012 06:13 GMT
DRM promotes interoperability? Wrong!
It's sad that the CEO of the W3C would be so wrong.
The Encrypted Media Extensions would standardize the APIs for content decryption modules (CDMs), but it would not standardize the CDMs themselves. It can't standardize the CDMs: pre-existing W3C policy requires that standards be implementable as open source, and an open-source CDM would make the encryption scheme unworkable. Therefore, the content "protected" by the CDMs would be restricted to "applications inaccessible to the Open Web or completely locked down devices." Exactly what Jaffe said he didn't want.
EME will not make the web more open. It will only make life easier for people who try to restrict users' freedoms. It's the <video> tag all over again, but even worse.
Re: Oh Dear
Of course, he's referring to the N9. The N900 runs Maemo, and was released before Elop became CEO of Nokia.
For one thing, there's the matter of scale. Shuttleworth is bankrolling Ubuntu, but he has only about $500 million to play with. Microsoft has over $60 billion to use in any way as long as a regulator doesn't stop them.
For another thing, Microsoft rarely competes on the merits. They use unfair contracts and misleading advertising to spread into new markets. To develop a market, you need to spend time in it, iterating and improving your product, until you have something good. Most companies need to be profitable fairly early in this process, but Microsoft has demonstrated that it will lose billions of dollars to conquer a new market. It's exceedingly unfair to compete against a player with negative margins, and it has destroyed many innovative companies.
IBM used to be the evil empire, so we used to hate them. Their management was incompetent, so they fell to morally neutral status a while ago.
Re: Solving PI vs. time to mine one Bitcoin...
If you really want to understand Bitcoin, you should study material from the people who work on it, such as http://bitcoin.org/en/how-it-works
Question 1: Is mining Sisyphean?
I can't tell whether Bitcoin mining is Sisyphean. The story of Sisyphus is a morality tale about the futility of undermining the rules of the gods. The problem with Sisyphus was that his ultimate opponent had divine powers. Bitcoin has no gods as enemies, as far as I know. Likewise, not everybody is involved in Bitcoin for the monetary reward.
The calculations are not there to slow the rate of expansion. They do slow it, but their real purpose is to reinforce the integrity of the Bitcoin system. Bitcoins are traded from one wallet to another via transactions. Transactions are listed in blocks, in a chain of blocks going back to the first block. Zero or more transactions are bundled together in a block, along with some additional data such as the identity of the previous block, and the block is accepted as part of the block chain only if its header hashes to a value that meets the current difficulty target.
The hard part is finding such a header. The difficulty is adjusted up and down depending on how quickly the previous several blocks were found, so that each block takes 10 minutes on average. The Sisyphean aspects are that:
1) The more miners working on the blockchain, the less likely you are to find a block, so your mining equipment becomes less valuable. CPU mining is already worthless, and GPU mining should become unprofitable due to competition from FPGA and ASIC miners.
2) The reward for finding a block is a combination of the transaction fees and the new Bitcoins themselves. The number of new Bitcoins per block is gradually decreasing, by design, until just under 21 million Bitcoins will have been generated by around 2140 (assuming the Bitcoin system is still operational then). Miners will still have an incentive to mine because they receive the transaction fees, but that depends on Bitcoin becoming an active currency with enough volume to make the transaction fees significant.
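The difficulty adjustment that keeps blocks averaging 10 minutes can be sketched in a few lines of Python. This is a simplified illustration, not the reference client's code, though the constants and the factor-of-4 clamp follow the real protocol:

```python
# Simplified sketch of Bitcoin's difficulty retargeting (every 2016
# blocks). A *higher* target means *easier* mining. If the last 2016
# blocks arrived faster than two weeks, the target shrinks (harder);
# slower, and it grows (easier), clamped to a factor of 4 either way.

TARGET_SPACING = 600        # seconds per block (10 minutes)
RETARGET_INTERVAL = 2016    # blocks between adjustments

def retarget(old_target: int, actual_timespan_s: int) -> int:
    expected = TARGET_SPACING * RETARGET_INTERVAL   # 1,209,600 s = 2 weeks
    clamped = max(expected // 4, min(actual_timespan_s, expected * 4))
    return old_target * clamped // expected

# Blocks found twice as fast as intended: target halves, difficulty doubles.
print(retarget(1 << 200, 604_800) == (1 << 200) // 2)   # True
```

The clamp is what makes the feedback loop stable: even a sudden flood of new mining hardware can only move the target by 4x per adjustment period.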
Question #2: Could Bitcoin be used for utilitarian purposes such as SETI or Pi?
It would have to be a different protocol, not Bitcoin, because Bitcoin is inextricably linked to the use of cryptographic hashes to find headers of appropriate difficulty. I suppose you could make a newer version of Bitcoin that swaps in a different hash function, if SHA256 turns out to have a fatal flaw within the next hundred years, but the scheme doesn't extend to general-purpose computing.
The great thing about the cryptographic hash is that anybody can verify a header far more cheaply than it cost to find. You could hypothetically mine using SETI workloads, but there's no way for somebody to independently verify that the SETI-miner has done the work. They would have to download the same blocks from SETI and redo the calculations. That's a lot of work and a single point of failure.
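That asymmetry, expensive to find but cheap to check, can be sketched with a toy proof-of-work in Python. The header and difficulty here are made up for illustration; real Bitcoin hashes an 80-byte header against a vastly harder target:

```python
import hashlib

def block_hash(header: bytes, nonce: int) -> int:
    # Bitcoin double-SHA256's the block header; we mimic that shape.
    digest = hashlib.sha256(
        hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    ).digest()
    return int.from_bytes(digest, "big")

def mine(header: bytes, zero_bits: int) -> int:
    """Search nonces until the hash has `zero_bits` leading zero bits."""
    target = 1 << (256 - zero_bits)
    nonce = 0
    while block_hash(header, nonce) >= target:
        nonce += 1
    return nonce

def verify(header: bytes, nonce: int, zero_bits: int) -> bool:
    # One hash and one comparison: the cheap side of the asymmetry.
    return block_hash(header, nonce) < (1 << (256 - zero_bits))

nonce = mine(b"toy block header", 16)           # tens of thousands of hashes
print(verify(b"toy block header", nonce, 16))   # True
```

Finding the nonce takes on the order of 2^16 hash attempts here; checking it takes exactly one. A SETI-style workload has no such shortcut for the checker.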
Also, using pi for compression is an extremely impractical idea. For one thing, there is no complete "solution" to pi, because its digits never end. For another, to use pi for compression you would need to store those digits somewhere to decompress against, which would more than wipe out any savings. Finally, the index at which an arbitrary string first appears in pi's digits is, on average, about as long as the string itself, so in general you save no space in transmission. I'm sure better mathematicians than I have proven this somewhere.
Re: Dumb question
"I gather that it's designed to get significantly harder to generate a new bitcoin as more are brought into existence, with a theoretical maximum number that could ever exist. So eventually a new bitcoin will require near-infinite processing power."
The algorithm sets a hard limit of 20,999,999.9769 BTC. That's part of the design. The Bitcoin system generates blocks at a fairly steady rate. (The rate temporarily increases when new mining hardware comes online, and temporarily decreases when mining hardware goes offline. Likewise, the processing power needed to find a block goes up, on average, as new mining hardware comes online.) But the number of Bitcoins per block decreases until, eventually, no more Bitcoins are generated. Currently, the reward is 25 BTC per block. Block #6,929,999 will generate 0.00000001 BTC, and after that there will be no more BTC generated. To claim you are making new BTC after that point would be fraud, and it would not be recognized as valid by the Bitcoin network.
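The 20,999,999.9769 BTC figure falls straight out of the halving schedule, and can be checked in a few lines of Python. The integer right-shift mirrors how the subsidy actually rounds down at each halving; the framing of the snippet is mine:

```python
# Sum Bitcoin's block subsidies era by era, in satoshis
# (1 BTC = 10**8 satoshi). Every 210,000 blocks the subsidy halves,
# rounding down, until it hits zero, so the total lands just under
# 21 million BTC.

HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50 * 10**8    # 50 BTC, in satoshis

def total_supply() -> int:
    total, subsidy = 0, INITIAL_SUBSIDY
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy
        subsidy >>= 1           # halve, rounding down
    return total

print(total_supply())   # 2099999997690000 satoshis = 20,999,999.9769 BTC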
The new Bitcoins go to the miner who first finds a solution to the mining puzzle, but that's not the primary purpose of the mining operation. The primary purpose of the mining operation is to validate transactions as having happened. Sort of a very expensive COMMIT operation. The transactions are grouped into the blocks, and the miner receives the transaction fees that are attached to them. Thus, mining should remain profitable after the last new BTC, thanks to those fees.
This was actually a pretty clever solution. By dribbling out Bitcoins using an algorithm, Satoshi solved the problem of distributing money without a central authority. By rewarding transaction blocks with new BTC, Satoshi created the incentive for people to start validating transactions in the days before there is a profitable volume of BTC transactions.
Of course, that's assuming nobody finds a major flaw in the algorithm (certainly not for want of trying), or the economic conditions drastically change (World War IV as Total War?), or people become disillusioned with Bitcoin for some reason. Bitcoin has already survived several flaws in implementation of Bitcoin services, and even a flaw in the core Bitcoin program, but it's still vulnerable. Everything is vulnerable, but Bitcoin is relatively new and people are more aware of its vulnerability.
Re: Dumb question
No, like any other commodity, the price will fluctuate according to the laws of supply and demand. It's independent of the cost of producing it. Right now, there is an alarmingly high demand for Bitcoins, so the price is pretty high.
The algorithm is designed to produce Bitcoins at a steady rate by adjusting the difficulty of the algorithm to the availability of miners. The miners have an incentive to mine because each block they successfully validate gives them a certain number of Bitcoins. But the cost of mining depends on the cost of acquiring the hardware, the cost of the electricity to run it, and the cost of the Internet connection to connect it to the Bitcoin network. And, as a practical matter, there is also the cost of spending time on Bitcoin mining instead of doing something else.
Many miners mine because their costs are lower than the wealth they receive by validating blocks, and that drives investment into Bitcoin mining. But the more miners enter, the higher the difficulty goes, to keep Bitcoin production steady. Essentially, you get more people competing for the same resources. The return on investment for the less efficient miners eventually becomes so low that they drop out, which provides a sort of balance to the system. Some people will continue to mine just for the fun of it, but a lot of the mining is done for profit.
The amusing part about this story is that the script kiddy is doing CPU mining. CPU mining became unprofitable a couple of years ago. The cost of electricity and the opportunity cost of setting it up have made CPU mining impractical compared to other techniques. In recent months, specially built chips (ASICs) have come online, so now a majority of the Bitcoin mining proceeds are going to people who invested in these devices that can do nothing but mine Bitcoins. Soon, a majority of GPU miners will also go offline, simply because it will be unprofitable to mine with GPUs.
In summary, you have it backwards. The inflation of Bitcoin doesn't depend on the processing power to mine it. The inflation of Bitcoin depends on the algorithm. The processing power loosely depends on the exchange rate, which is dependent on people's valuation of the currency.
Amazon and Google, please work together.
Google Shopping is almost useless. Retailers pay to be included, and they generally have poor selection and high prices.
Amazon Search is almost useless. It frequently returns items not related to the search terms. It's getting better, but it's still a chore to sift through. Also, sorting by price is a bad joke.
In my ideal world, I would use Google Shopping to search for things from Amazon.
No, the problem is still there.
"The problem that gave rise to VP8 and WebM, namely Mozilla declining to support H.264 in 2010 for fear that it might be target of an patent bomb from MPEG LA, the overseer of MPEG IP, has therefore passed."
No, the problem is still there, and you're an idiot, Richard Chirgwin.
The problem is that every implementation and every commercial use of H.264 requires a paid license, which is incompatible with free software licenses such as the GPL and with many open source licenses.
Mozilla finally caved on the use of H.264 in playback because the vast majority of systems where Firefox is installed have paid licenses of H.264. It's different for bare-metal systems such as Firefox OS, and it's different for encoders and potentially commercial uses such as WebRTC. If Mozilla required H.264 in these situations, then they would be adding new obligations to users, which is incompatible with the principles of free software.
This basic incompatibility between H.264 and Mozilla's mission is probably why Microsoft and their pet Nokia are trying to kill VP8.
Re: Doesn't Nokia have a point?
"Given I take H.264 video off my HD camera, edit it, encode it with x.264 and play it back all within Linux I'm not sure you understand the situation"
No, I'm not sure that you understand the situation.
To use H.264 legally, you need licenses: one to implement the codec, and another to use it commercially.
Windows can use H.264 because Microsoft pays a license for the codec. Flash can use H.264 because Adobe pays a license. MacOS can use H.264 because Apple pays a license. Android and Raspberry Pi can use H.264 because their respective vendors buy licenses. Mozilla is finally doing H.264 in Firefox because the vast majority of Firefox installations are on systems for which somebody paid a license.
For Linux, in general, nobody paid for a license. The x264 developers are flagrantly ignoring the patents, and they are careful to work from countries with lax patent enforcement. Mainstream Linux distributions are reluctant to ship H.264 because the license requirement is incompatible with the GPL and even the BSD license.
Then, to publish videos with H.264, you also need a license. For now, MPEG LA is letting people use H.264 for personal purposes, but do anything commercial and you're supposed to pay.
That's not to mention all the submarine patents that could derail H.264, just like Nokia is trying to do with VP8.
Morons like streaky are why screen resolutions haven't increased in 10 years.
The point of having so many pixels is so you do not see individual pixels. That's what Apple was advertising with the Retina branding, and what Google is now calling the Chromebook Pixel.
I don't want to see pixels. I've seen plenty of pixels. I want the pixels to be so small that I see smooth fonts and sharp pictures. That's the point of having such sharp screens.
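For a sense of scale, pixel density is just geometry. The inputs below are the Chromebook Pixel's published panel specs (2560x1700 at a 12.85-inch diagonal); whether individual pixels are visible also depends on viewing distance, which this sketch ignores:

```python
import math

# Back-of-the-envelope pixel density: diagonal resolution in pixels
# divided by diagonal size in inches. The `ppi` helper is just for
# this sketch.

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2560, 1700, 12.85)))   # 239
```

At roughly 239 pixels per inch, a pixel is about 0.1 mm across, which is below what most people can resolve at laptop viewing distances.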
Re: So many issues I hardly know where to start...
"1. Most obvious question – even if you're a big Chrome fan, why not buy a MacBook Air and access your favorite Google apps and services from it without giving up the benefits of local capabilities?"
Most obvious answer – Because you need to maintain the OS on a MacBook. You don't need to maintain the OS on a Chromebook. Just think of the children. Or the parents.
Except for not being compatible with any apps or devices, so you have to retrain them on the Google Cloud way of doing things. You can't have everything. Mac users should already be familiar with this.
Who wants a federal monopoly?
What a weird strawman, Mr. Orlowski. I don't think any reasonable person wants a federal monopoly. I certainly don't want the same organization that gave us the TSA to give me last-mile Internet. What I tend to hear is that Google, et al, want the last mile of Internet access to be regulated and opened to competition, but still privately owned and maintained.
Orlowski says Americans have a reputation for being doers instead of whiners, but existing regulations mean we just aren't allowed to do. As a resident of San Francisco, I'm sure you know of Monkeybrains' attempt to bring micro-trench fiber to the city. Well, they couldn't figure out how to file the paperwork to get their construction approved. Sonic.net is able to move forward only by turning themselves into a phone company and adopting all the regulations that are involved with that, which means no naked DSL.
I suspect that Cyrus Farivar at Ars is a bit anxious about the Comcast thing because it's a 6-month or 12-month promotional deal. At the end, he'll have to face the choice of paying 2-3 times as much for the same service, or having his service cut to 1/2 or 1/4 of its current speed. He knows he won't get anything comparable from AT&T, so he can't threaten to leave to get better prices. Also, frankly, $45 for 24 Mbps is pathetic compared to many other places.
Re: No surprise, I predict that there will be more to come
Well, my plan for achieving migration is saying, "I will make it happen," on my network. If you are a network administrator, now it is YOUR personal duty to enable IPv6 connectivity on your network. IPv4 was deployed by millions of individual decisions to join the Internet. IPv6 will be deployed by the same.
In my section of the USA, the ISPs are trying to eliminate the home router market. When you get new Internet service from Comcast or AT&T, you get a combination modem and wireless router, too. The upside is that the routers they've started shipping in the last few months support IPv6. This means homes in the USA are gradually shifting to IPv6 without consumers having to learn new technology. This is a positive development.
Embrace, Extend, Extinguish
"[Microsoft] maintains it wants to make WebRTC more flexible… That makes the technology more readily adaptable to a given developer’s needs, but it also limits interoperability."
Good to see that some parts of Microsoft are still up to their old tricks.
I suspect that one reason SIP never caught on is that it allows a bewildering array of codecs. If you have a SIP client from one vendor and want to communicate with a SIP client from another vendor, you have to ship support for a pile of codecs, most of them patented. Skype is just simpler to use and more reliable. So I see Microsoft trying to preserve the value of its investment in Skype.
Finkelstein old-fashioned, irrelevant?
I find any Worst CEO of 2012 list that does not include Stephen Elop to be seriously suspect. I mean, doubling down on a losing strategy, while tossing your institutional knowledge overboard, seems like it should be a reason Why Smart Executives Fail.
But castigating Zuckerberg for not wearing a 19th Century period costume is pretty low. Zuckerberg signals that he doesn't care about the traditional finance people. In fact, he doesn't. As long as he has controlling shares, it doesn't really matter what other people think, as long as he doesn't break any laws. If Zuckerberg can reduce his mental burden by wearing a hoodie every day, then he can concentrate his energy on stuff that really matters to his shareholders (especially himself).
On the other hand, what has Sydney Finkelstein done? Trained a bunch of executives? Made friends with the 1%? I require better reasons why I should pay attention to Finkelstein instead of Zuckerberg.
UPS? So lucky!
My company just moved into a new facility. I work in the AV department, and one of the items I put on my wish list was a UPS system. The company hired a consultant to build the AV system, and he denied the request, saying, "The power from the utility is reliable."
Um, part of my work is digital recordings. I have to buy my own UPS units. :(
Automatic memory management
Of course, this sort of problem is absolutely impossible in a system with automatic memory management, because the programmer has no direct access to pointers. For example, Java.
Re: Update on exit
It's true, you really don't know.
Linux (and other modern Unix-like systems) has the concept of an inode that is separate from the filename. The filename is merely a link between the directory structure and the inode, and a file can have more than one link. When the link count reaches 0, the file is deleted.
So, when a program is running, it creates an in-memory link to the inode. It's possible to remove the file from the directory structure, deleting it, but it will still be on disk because the in-memory links keep the number of links from reaching 0.
It's not perfect, if you consider badly written programs. Some programs depend on files that load after the program loads, and the in-memory link can cause confusion. Many times, people have been working on a file, deleted the old version, hit save, and then found that the new version was not there. That's because the program was holding onto the file's inode and didn't verify that the inode still had a link to the directory structure when the user hit save.
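The link-count behaviour is easy to demonstrate on Linux with a toy Python sketch using a temporary file:

```python
import os
import tempfile

# Toy demonstration (Linux/Unix): removing a file's last directory
# entry does not destroy the inode while a descriptor holds it open.

fd, path = tempfile.mkstemp()
os.write(fd, b"still here")
os.unlink(path)                  # delete the only name for the inode

print(os.path.exists(path))      # False: the name is gone
os.lseek(fd, 0, os.SEEK_SET)
print(os.read(fd, 10))           # b'still here': the data is not
os.close(fd)                     # now the kernel can free the inode
```

Until that final close, the disk space stays allocated, which is also why deleting a huge log file that a daemon still has open frees no space.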
Everybody's doing it
No surprise that Skype and Symantec have problems getting people to install their latest software. I hate them both.
But everyone's doing it. Apple demands that you leave your Mac unusable for a long time while it installs updates. (Apple on Windows also proactively shuts down whatever you're doing so it can update. So horrible.) Java, ATI, and Adobe sometimes try to sneak some unwanted ad-ware on your computer. Mozilla randomly plays 20 questions with you about your plugins.
Part of the problem with Windows is that it locks files that are open, so you can't replace them in place, and you need a time when the program isn't running to do the updates. In Linux, you can replace a program's files at any time, and then restart the program when convenient.
No IPv6 = no sale
I wouldn't recommend any router that ships without IPv6, and the Asus RT-N56U ships without it.
World IPv6 Launch was earlier this month. It's an essential feature. If the router doesn't have IPv6, at least it needs a nice, stable third-party firmware project.